This paper proposes an active learning framework that effectively and efficiently mitigates hallucinations in large language models (LLMs) for text summarization by selecting diverse hallucination samples for annotation and fine-tuning.
This survey explores process-oriented automatic text summarization (ATS) and the impact of large language models (LLMs) on ATS methods.