Core Concept
Large language models (LLMs) hallucinate in real-world user interactions, motivating HaluEval-Wild, a benchmark designed to assess and improve their reliability in the wild.
Summary
HaluEval-Wild introduces a benchmark for evaluating LLM hallucinations in real-world settings. It collects challenging user queries from real user-LLM interaction datasets such as ShareGPT, categorizes them into five types, and synthesizes reference answers using GPT-4 with retrieval-augmented generation (RAG); a rough sketch of this pipeline follows below. The benchmark highlights the nuanced trade-off between model performance and reliability, especially in knowledge-distilled models. Evaluating a range of LLMs on the benchmark reveals marked differences in hallucination rates, underscoring the importance of understanding and improving LLM reliability in dynamic user interactions.
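As a rough illustration of the pipeline the summary describes, here is a minimal Python sketch assuming the OpenAI chat API. The category labels, prompts, and helper functions are illustrative assumptions, not the paper's exact artifacts, and the retrieval step is left abstract.

```python
# Illustrative sketch of a HaluEval-Wild-style pipeline: classify a user
# query into one of five types, then synthesize a grounded reference
# answer from retrieved evidence (RAG). Labels and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Five query types (placeholder labels; the paper defines its own taxonomy).
CATEGORIES = [
    "out-of-scope-information",
    "complex-reasoning",
    "inappropriate-content",
    "beyond-modality",
    "confused-query",
]

def classify_query(query: str) -> str:
    """Ask an LLM to assign one of the five categories to a user query."""
    prompt = (
        f"Classify the user query into exactly one category from {CATEGORIES}.\n"
        f"Query: {query}\nCategory:"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def synthesize_reference(query: str, retrieved_docs: list[str]) -> str:
    """Generate a reference answer grounded in retrieved evidence."""
    context = "\n".join(retrieved_docs)
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                f"Using only the evidence below, answer the query.\n"
                f"Evidence:\n{context}\n\nQuery: {query}"
            ),
        }],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()
```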
Statistics
Alpaca 7B shows a hallucination rate of 99.20%.
GPT-4 Turbo has the lowest average hallucination rate of 18.64%.
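For concreteness, a hallucination rate like those above can be read as the percentage of model answers judged hallucinated against the benchmark's reference answers. A minimal sketch, assuming per-answer boolean judgements (the judging protocol itself is the benchmark's, not shown here):

```python
# Hallucination rate = fraction of answers judged hallucinated, as a percentage.
def hallucination_rate(judgements: list[bool]) -> float:
    """judgements[i] is True if answer i was judged a hallucination."""
    return 100.0 * sum(judgements) / len(judgements)

# Example: 2 hallucinated answers out of 5 -> 40.00%
print(f"{hallucination_rate([True, False, True, False, False]):.2f}%")
```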