The paper explores the use of knowledge graphs (KGs) to reduce hallucinations in Large Language Models (LLMs). It surveys strategies for augmenting LLMs with external knowledge, categorizing them into Knowledge-Aware Inference, Knowledge-Aware Learning, and Knowledge-Aware Validation, and discusses techniques such as KG-augmented retrieval, reasoning, and generation. The effectiveness of these methods is analyzed through metrics such as accuracy, top-k accuracy, MRR, Hits@1, and human evaluation. Future research directions include improving KG quality, optimizing Mixture-of-Experts (MoE) LLMs, unifying symbolic and subsymbolic approaches, and integrating causality-awareness.
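As a point of reference for the ranking metrics mentioned above, the following is a minimal sketch of how MRR and Hits@k are conventionally computed over ranked candidate lists. The function and variable names (`ranked_ids`, `gold_id`, `preds`) are illustrative, not from the paper.

```python
def reciprocal_rank(ranked_ids, gold_id):
    """Return 1/rank of the gold answer in the ranked list, or 0.0 if absent."""
    try:
        return 1.0 / (ranked_ids.index(gold_id) + 1)
    except ValueError:
        return 0.0

def mrr(predictions):
    """Mean Reciprocal Rank over (ranked_ids, gold_id) pairs."""
    return sum(reciprocal_rank(r, g) for r, g in predictions) / len(predictions)

def hits_at_k(predictions, k=1):
    """Fraction of queries whose gold answer appears in the top k candidates."""
    return sum(g in r[:k] for r, g in predictions) / len(predictions)

# Toy example: gold answer "a" ranked 1st, 2nd, and 3rd respectively.
preds = [(["a", "b", "c"], "a"),
         (["b", "a", "c"], "a"),
         (["b", "c", "a"], "a")]

print(mrr(preds))           # (1 + 1/2 + 1/3) / 3 ≈ 0.611
print(hits_at_k(preds, 1))  # 1/3 ≈ 0.333
```

Hits@1 reduces to plain accuracy on the top-ranked candidate, which is why both often appear side by side in KG question-answering evaluations.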
Key insights from the source content by Garima Agraw... at arxiv.org, 03-19-2024
https://arxiv.org/pdf/2311.07914.pdf