The study examines position bias in Large Language Models (LLMs) for zero-shot abstractive summarization. Position bias is defined as the unfair prioritization of information from certain parts of the input text (e.g., the lead sentences), which leads to summaries that over-represent some regions of the source and omit salient content from others. Several LLMs and pretrained encoder-decoder models are evaluated on real-world datasets including CNN/DM, Reddit TL;DR, News Summary, and XSum. The research provides insights into model performance and position bias, highlighting the challenges and opportunities in leveraging LLMs for effective abstractive summarization.
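The study's own metric is not reproduced here; as a minimal sketch of how position bias could be quantified under simple assumptions, the snippet below aligns each generated summary sentence to its most lexically similar source sentence and histograms the matched positions into document bins. The function names (`position_histogram`, `_jaccard`) and the Jaccard-overlap alignment are illustrative choices, not the paper's method.

```python
from collections import Counter

def _tokens(text: str) -> set:
    """Lowercased word tokens; a crude stand-in for a real tokenizer."""
    return set(text.lower().split())

def _jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two token sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def position_histogram(source_sents, summary_sents, n_bins=4):
    """Align each summary sentence to its most similar source sentence,
    then count which relative-position bin (e.g., document quartile) the
    match falls into. Mass concentrated in early bins suggests lead bias;
    a roughly uniform histogram suggests little position bias."""
    hist = Counter({b: 0 for b in range(n_bins)})
    src_tokens = [_tokens(s) for s in source_sents]
    for sent in summary_sents:
        tok = _tokens(sent)
        # Best-matching source sentence by token overlap (illustrative only).
        best = max(range(len(src_tokens)),
                   key=lambda i: _jaccard(tok, src_tokens[i]))
        bin_idx = min(best * n_bins // len(source_sents), n_bins - 1)
        hist[bin_idx] += 1
    return hist

if __name__ == "__main__":
    source = ["The mayor announced a new budget.",
              "Schools will receive extra funding.",
              "Critics questioned the timeline.",
              "A vote is scheduled for next month."]
    summary = ["The mayor announced a new budget."]
    print(position_histogram(source, summary))  # all mass in bin 0 -> lead bias
```

Aggregated over a dataset like CNN/DM, a histogram of this kind would show whether a model draws disproportionately from the beginning of articles, which is one way the lead-bias phenomenon the study discusses can be made concrete.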