Generating abstractive summaries that comprehensively cover diverse perspectives without underrepresenting certain groups.
Mutual information between source texts and summaries serves as a universal, task-agnostic measure of how well a summarizer preserves information that is useful for downstream decisions.
Mutual information between source texts and summaries is a task-agnostic measure of summarizer effectiveness.
Leveraging relation triples for interpretable summarization.
LLMs struggle to cover diverse information effectively, highlighting the challenges in multi-document summarization.
The automated metric BOOOOKSCORE evaluates the coherence of book-length summaries, offering insights into LLM summarization performance.
Small-scale models can achieve competitive summarization results without relying on large language models or human-written references.
The authors propose COSMIC, a task-oriented evaluation metric based on the mutual information between source texts and summaries; it correlates well with human-judgment-based metrics and effectively predicts downstream task performance.
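To make the mutual-information idea concrete, here is a minimal sketch (not the COSMIC implementation) of estimating I(source; summary) from paired embeddings under a Gaussian assumption. The random vectors and the 0.8/0.2 mixing are hypothetical placeholders standing in for a real sentence encoder and a real summarizer.

```python
# Minimal sketch, assuming (source, summary) pairs are represented as
# fixed-size embeddings and are roughly jointly Gaussian. Under that
# assumption, I(X; Y) = 0.5 * log(det(Cov_X) * det(Cov_Y) / det(Cov_[X,Y])).
import numpy as np

def gaussian_mi(x: np.ndarray, y: np.ndarray) -> float:
    """Closed-form mutual information (in nats) for jointly Gaussian X, Y."""
    joint = np.hstack([x, y])                    # (n, d_x + d_y)
    d_x = x.shape[1]
    cov = np.cov(joint, rowvar=False)            # joint covariance
    _, logdet_x = np.linalg.slogdet(cov[:d_x, :d_x])
    _, logdet_y = np.linalg.slogdet(cov[d_x:, d_x:])
    _, logdet_xy = np.linalg.slogdet(cov)
    return 0.5 * (logdet_x + logdet_y - logdet_xy)

rng = np.random.default_rng(0)
n, d = 500, 8                        # 500 (source, summary) pairs, 8-dim embeddings
src = rng.normal(size=(n, d))        # placeholder for source-text embeddings
noise = rng.normal(size=(n, d))
summ = 0.8 * src + 0.2 * noise       # a "summary" that preserves most source information
print(f"Estimated I(source; summary) ≈ {gaussian_mi(src, summ):.2f} nats")
```

A summarizer that discards source information (e.g., summ drawn independently of src) would drive the estimate toward zero, which is the intuition behind using mutual information as a task-agnostic quality signal.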