The study examines position bias in Large Language Models (LLMs) for zero-shot abstractive summarization. Position bias is defined as the unfair prioritization of information from certain parts of the input text, leading to undesirable summarization behavior. Various LLMs and pretrained encoder-decoder models are analyzed on real-world datasets such as CNN/DM, Reddit TL;DR, News Summary, and XSum. The research provides insights into model performance and position bias, highlighting the challenges and opportunities in leveraging LLMs for effective abstractive summarization.
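One rough way to make the notion of position bias concrete is to map each summary sentence back to its closest source sentence and look at where in the document those matches fall: a strong skew toward early positions suggests lead bias. The sketch below is purely illustrative and is not the paper's actual methodology; the overlap metric, function names, and example text are all assumptions.

```python
# Illustrative sketch (not the paper's method): estimate where in the
# source a summary draws from, using simple word-overlap alignment.

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def mean_source_position(source_sents, summary_sents):
    """For each summary sentence, find the best-matching source sentence
    and return the mean relative position (0.0 = start, 1.0 = end)."""
    n = len(source_sents)
    positions = []
    for s in summary_sents:
        best = max(range(n), key=lambda i: word_overlap(s, source_sents[i]))
        positions.append(best / max(n - 1, 1))
    return sum(positions) / len(positions)

# Toy example: the summary only covers the article's first sentence,
# so the mean relative position is 0.0 (pure lead behavior).
source = [
    "The mayor announced a new transit plan on Monday.",
    "Critics argue the budget is insufficient.",
    "Construction is expected to begin next year.",
    "Local businesses expressed cautious optimism.",
]
summary = ["The mayor announced a transit plan."]
print(mean_source_position(source, summary))  # → 0.0
```

A real analysis would use a stronger alignment signal (e.g. ROUGE or embedding similarity) and aggregate over a whole dataset, but the same position statistic applies.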
by Anshuman Chh... at arxiv.org, 03-18-2024
https://arxiv.org/pdf/2401.01989.pdf