The paper introduces Rowen, a framework that aims to enhance the factual accuracy of large language model (LLM) outputs by integrating parametric and external knowledge.
The key highlights are:
Rowen employs a consistency-based hallucination detection module that assesses the reliability of the initial response generated by the LLM's internal reasoning. The module evaluates semantic inconsistency among answers to perturbed versions of the same question, across different languages and models, to detect potential hallucinations (a minimal sketch of this check appears after the highlights).
When high uncertainty is detected, Rowen triggers a retrieval process that fetches relevant external information to rectify the reasoning and correct inaccuracies in the initial response, balancing the LLM's parametric knowledge with external information.
To reduce external hallucinations, Rowen limits the risk of incorporating erroneous retrieved information by invoking retrieval only when it is needed. If the perturbed answers convey consistent content, indicating that the LLM can produce the correct answer on its own, Rowen directly adopts the original answer produced by internal reasoning (see the adaptive-retrieval sketch after the highlights).
Comprehensive experiments on the TruthfulQA and StrategyQA datasets demonstrate that Rowen significantly outperforms state-of-the-art baselines in both detecting and mitigating hallucinated content within LLM outputs.
Rowen also exhibits strong scalability, performing well when applied to open-source LLMs and on datasets with intermediate-length answers.
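To make the consistency check concrete, here is a minimal, illustrative sketch in Python. It is not the authors' implementation: the `answers` list, the pairwise `judge` callable, and the exact-match stand-in for a semantic judge are assumptions for illustration, standing in for answers gathered across languages and models and for an NLI- or LLM-based comparison.

```python
from itertools import combinations

def consistency_score(answers, judge):
    """Fraction of answer pairs a semantic judge marks as consistent.

    `answers` are responses to semantically equivalent perturbations of the
    same question (e.g., across languages or models); `judge(a, b)` is a
    caller-supplied semantic-equivalence check (hypothetical stand-in for an
    NLI model or an LLM prompted to compare two answers).
    """
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 1.0
    return sum(1 for a, b in pairs if judge(a, b)) / len(pairs)

# Toy illustration: exact match as a crude stand-in for a semantic judge.
answers = ["Paris", "Paris", "Lyon"]
print(consistency_score(answers, lambda a, b: a.lower() == b.lower()))
# ~0.33 -> low agreement, flagging a potential hallucination
```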
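A similarly hedged sketch of the retrieve-only-when-needed decision described above follows; the `generate`, `perturb`, `judge`, and `retrieve` callables and the 0.8 threshold are hypothetical placeholders, not the paper's actual components or settings.

```python
from itertools import combinations

def answer_with_adaptive_retrieval(question, generate, perturb, judge,
                                   retrieve, threshold=0.8):
    """Keep the parametric answer when perturbed answers agree; otherwise
    retrieve external evidence and regenerate the answer."""
    initial = generate(question)

    # Answer semantically equivalent perturbations and measure agreement.
    perturbed_answers = [generate(q) for q in perturb(question)]
    pairs = list(combinations(perturbed_answers, 2))
    score = sum(1 for a, b in pairs if judge(a, b)) / len(pairs) if pairs else 1.0

    if score >= threshold:
        # High consistency: trust internal reasoning and skip retrieval,
        # avoiding noise from potentially irrelevant documents.
        return initial

    # Low consistency: fetch evidence and ask the model to repair its answer.
    evidence = retrieve(question)
    repair_prompt = (
        f"Question: {question}\n"
        f"Evidence: {evidence}\n"
        f"Initial answer: {initial}\n"
        "Revise the answer so it is consistent with the evidence."
    )
    return generate(repair_prompt)

# Toy usage with stub components (a real system would plug in LLM and
# retriever calls here).
print(answer_with_adaptive_retrieval(
    "What is the capital of France?",
    generate=lambda prompt: "Paris",
    perturb=lambda q: [q, "Quelle est la capitale de la France ?", q + " (rephrased)"],
    judge=lambda a, b: a == b,
    retrieve=lambda q: "France's capital is Paris.",
))
# Prints "Paris": the perturbed answers agree, so retrieval is skipped.
```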
Key insights extracted from the paper by Hanxing Ding... on arxiv.org, 10-01-2024 (https://arxiv.org/pdf/2402.10612.pdf).