The study examines position bias in Large Language Models (LLMs) for zero-shot abstractive summarization. Position bias is defined as the unfair prioritization of information from certain parts of the input text, yielding summaries that over-represent those segments. Various LLMs and pretrained encoder-decoder models are analyzed on real-world datasets including CNN/DM, Reddit TL;DR, News Summary, and XSum. The research provides insights into model performance and position bias, highlighting the challenges and opportunities in leveraging LLMs for effective abstractive summarization.
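One common way to probe the position bias described above is to map each summary sentence to its most similar source sentence and look at where those source sentences sit in the document. The sketch below is an illustrative assumption, not the paper's own metric: it uses character-level similarity from Python's standard library and buckets matched positions into lead/middle/tail.

```python
from difflib import SequenceMatcher

def position_histogram(source_sents, summary_sents, bins=3):
    """Rough position-bias probe: for each summary sentence, find the
    most similar source sentence and record its relative position.
    Returns counts over `bins` equal-width position buckets
    (lead / middle / tail for bins=3). Hypothetical helper, not the
    paper's metric."""
    counts = [0] * bins
    n = len(source_sents)
    for s in summary_sents:
        # best-matching source sentence by character-level similarity
        best = max(range(n),
                   key=lambda i: SequenceMatcher(None, s, source_sents[i]).ratio())
        bucket = min(best * bins // n, bins - 1)
        counts[bucket] += 1
    return counts

source = ["The cat sat on the mat.",
          "Dogs bark loudly at night.",
          "Rain fell over the city."]
summary = ["A cat sat on a mat."]
print(position_histogram(source, summary))  # heavy lead bucket -> lead bias
```

A skew of counts toward the first bucket across a dataset would indicate lead bias, the pattern the study investigates in news-style corpora like CNN/DM.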
Key Insights Extracted From
by Anshuman Chh... at arxiv.org 03-18-2024
https://arxiv.org/pdf/2401.01989.pdf