NoMIRACL introduces a dataset for evaluating LLM robustness in RAG across 18 languages. It uses two subsets: a non-relevant subset, where retrieved passages contain no answer and the model should abstain (measuring hallucination rate), and a relevant subset, where passages do contain the answer and the model should recognize it (measuring error rate). Most LLMs struggle to balance the two capacities; GPT-4 shows the best tradeoff, while Mistral tends to provide explanations but has high error rates. Different LLMs exhibit distinct patterns in response generation.
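The two metrics described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's released evaluation code; the function names, the abstention string "I don't know", and the confirmation string are assumptions for the example.

```python
# Hedged sketch of NoMIRACL-style metrics. Names and response strings
# are illustrative assumptions, not the paper's actual API.

def hallucination_rate(responses):
    """Fraction of non-relevant-subset queries where the model claims
    an answer instead of abstaining with 'I don't know'."""
    return sum(r != "I don't know" for r in responses) / len(responses)

def error_rate(responses, expected="Yes, answer is present"):
    """Fraction of relevant-subset queries where the model fails to
    recognize that the retrieved passages contain the answer."""
    return sum(r != expected for r in responses) / len(responses)

# Non-relevant subset: an ideal model abstains on every query.
non_rel = ["I don't know", "Yes, answer is present",
           "I don't know", "I don't know"]
# Relevant subset: an ideal model confirms the answer on every query.
rel = ["Yes, answer is present", "I don't know", "Yes, answer is present"]

print(hallucination_rate(non_rel))  # 1 of 4 queries hallucinated -> 0.25
print(error_rate(rel))              # 1 of 3 queries missed
```

A model with a good tradeoff drives both rates toward zero simultaneously; the paper's finding is that most models lower one only at the expense of the other.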