The study examines position bias in Large Language Models (LLMs) for zero-shot abstractive summarization. Position bias is defined as a model's unfair prioritization of information from certain parts of the input text, leading to undesirable summarization behavior. Various LLMs and pretrained encoder-decoder models are analyzed on real-world datasets such as CNN/DM, Reddit TL;DR, News Summary, and XSum. The research provides insights into model performance and position bias, highlighting both the challenges and the opportunities in leveraging LLMs for effective abstractive summarization.
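One common way to estimate position bias (a sketch, not the paper's actual protocol) is to map each summary sentence to the source sentence it overlaps most with, then look at where those matches fall in the document: matches clustered near 0.0 suggest lead bias. The helper names and the token-overlap metric below are illustrative assumptions.

```python
# Sketch: estimate where a summary draws from in the source document.
# Token overlap is a crude stand-in for a proper similarity metric (e.g. ROUGE).

def token_overlap(summary_sent: str, source_sent: str) -> float:
    """Fraction of summary-sentence tokens that also appear in the source sentence."""
    ts = set(summary_sent.lower().split())
    return len(ts & set(source_sent.lower().split())) / max(len(ts), 1)

def match_positions(source_sents: list[str], summary_sents: list[str]) -> list[float]:
    """For each summary sentence, return the relative position (0.0 = start of
    the document, 1.0 = end) of its best-matching source sentence."""
    n = max(len(source_sents) - 1, 1)
    positions = []
    for s in summary_sents:
        best = max(range(len(source_sents)),
                   key=lambda i: token_overlap(s, source_sents[i]))
        positions.append(best / n)
    return positions

source = [
    "The mayor announced a new transit plan on Monday.",
    "Critics argue the budget is insufficient.",
    "Construction is expected to begin next spring.",
]
summary = ["A new transit plan was announced by the mayor."]
print(match_positions(source, summary))  # the single summary sentence maps to the lead -> [0.0]
```

Averaging these positions over a dataset gives a rough per-model profile of which regions of the input the summaries rely on.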
Key ideas extracted from arxiv.org, by Anshuman Chh..., 03-18-2024
https://arxiv.org/pdf/2401.01989.pdf