NoMIRACL introduces a dataset for evaluating LLM robustness in RAG across 18 languages. It measures hallucination and error rates using two subsets: a non-relevant subset, where no retrieved passage answers the query, and a relevant subset, where one does. Most LLMs struggle to balance the two capacities, with GPT-4 showing the best tradeoff. Mistral provides explanations but suffers from high error rates. Different LLMs exhibit distinct patterns in response generation.
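The two metrics can be illustrated with a minimal sketch. This is not the official NoMIRACL evaluation code; the `"no_answer"` label and the helper functions are hypothetical stand-ins for however a given harness marks an "I don't know" response:

```python
def hallucination_rate(non_relevant_answers):
    """On the non-relevant subset, a hallucination is any response where the
    model answered instead of recognizing that no passage was relevant."""
    return sum(a != "no_answer" for a in non_relevant_answers) / len(non_relevant_answers)

def error_rate(relevant_answers):
    """On the relevant subset, an error is a response where the model
    claimed no relevant passage existed despite one being present."""
    return sum(a == "no_answer" for a in relevant_answers) / len(relevant_answers)

# Toy example: labels assigned by some answer-matching step (assumed).
non_relevant = ["no_answer", "hallucinated", "no_answer", "no_answer"]
relevant = ["answered", "no_answer", "answered", "answered"]

print(f"hallucination rate: {hallucination_rate(non_relevant):.2f}")  # 0.25
print(f"error rate: {error_rate(relevant):.2f}")  # 0.25
```

The tradeoff the summary describes is between keeping both numbers low at once: a model that refuses often scores well on hallucination but poorly on error rate, and vice versa.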