Key Concepts
Large language models often fail to correctly understand non-affirmative statements, particularly those involving hypothetical scenarios, and are susceptible to knowledge conflicts when answering questions based on such contexts.
Summary
The paper investigates the reading comprehension and context-faithfulness capabilities of large language models (LLMs), focusing on their ability to understand non-affirmative statements and handle knowledge conflicts.
Key highlights:
- To accurately assess LLMs' natural language understanding (NLU) abilities, the authors propose using "imaginary" data that is independent of the models' parametric knowledge, avoiding distortions caused by knowledge conflicts (illustrated in the sketch after this list).
- Evaluating LLMs on imaginary data, the authors find that the models often fail to correctly understand non-affirmative statements, particularly those involving hypothetical scenarios expressed through modals and conditionals.
- The authors further investigate the LLMs' context-faithfulness by comparing their performance on imaginary, supported, and contradicting data. They find that the more semantically involved the context, the more susceptible the models are to knowledge conflicts, often resorting to their internal knowledge rather than relying exclusively on the provided text.
- The authors suggest that in the quest for trustworthy systems, further work should be devoted to both the text-understanding and text-faithfulness aspects of LLMs, and their interaction.
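The paper's actual dataset and prompts are not reproduced here, but the evaluation setup can be illustrated with a minimal sketch. All entity names, templates, and helper functions below are hypothetical, chosen only to show how single-sentence contexts can be varied along two axes: the semantic modification (affirmative, negated, modal, conditional) and the relation of the stated fact to world knowledge (imaginary, supported, contradicting).

```python
# A minimal sketch with hypothetical names and templates; not the paper's
# actual data or code. Contexts vary along two axes: semantic modification
# and the relation of the stated fact to the model's parametric knowledge.

from itertools import product

# Knowledge conditions: an imaginary entity carries no parametric knowledge,
# "supported" matches world knowledge, "contradicting" clashes with it.
FACTS = {
    "imaginary":     ("Zorblax", "the capital of Veldoria"),
    "supported":     ("Paris", "the capital of France"),
    "contradicting": ("Berlin", "the capital of France"),
}

# Semantic modifications applied to the same underlying statement.
TEMPLATES = {
    "affirmative": "{subj} is {pred}.",
    "negated":     "{subj} is not {pred}.",
    "modal":       "{subj} might be {pred}.",
    "conditional": "If the referendum passes, {subj} will be {pred}.",
}

# What a context-faithful reader should answer, using the text alone.
GOLD = {"affirmative": "yes", "negated": "no",
        "modal": "unknown", "conditional": "unknown"}

def build_probes():
    """Yield (condition, modification, context, question, gold) tuples."""
    for (cond, fact), (mod, tpl) in product(FACTS.items(), TEMPLATES.items()):
        subj, pred = fact
        context = tpl.format(subj=subj, pred=pred)
        question = f"Based only on the text above, is {subj} {pred}?"
        yield cond, mod, context, question, GOLD[mod]

for cond, mod, context, question, gold in build_probes():
    print(f"[{cond}/{mod}] {context} {question} -> {gold}")
```

Crossing the two axes in this way is what lets imaginary contexts serve as a knowledge-free baseline: on imaginary items any error must come from misreading the text, while a further drop on contradicting items isolates the knowledge-conflict effect.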
Statistics
The authors use simple, single-sentence contexts to isolate the effects of semantic modifications and knowledge conflicts.
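As a hedged illustration of how such controlled contexts support the comparisons described above, one could score each (condition, modification) cell separately and read knowledge-conflict susceptibility off the accuracy gap between imaginary and contradicting contexts. The functions below are a sketch over hypothetical prediction records, not the authors' evaluation code.

```python
from collections import defaultdict

def accuracy_by_cell(records):
    """records: iterable of (condition, modification, predicted, gold).
    Returns accuracy for each (condition, modification) cell."""
    totals, hits = defaultdict(int), defaultdict(int)
    for cond, mod, predicted, gold in records:
        totals[(cond, mod)] += 1
        hits[(cond, mod)] += int(predicted == gold)
    return {cell: hits[cell] / totals[cell] for cell in totals}

def conflict_gap(acc, mod):
    """Accuracy drop from imaginary to contradicting contexts for one
    semantic modification; a large gap suggests the model falls back on
    its parametric knowledge instead of the text."""
    return acc[("imaginary", mod)] - acc[("contradicting", mod)]
```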
Quotes
"Crucially, these phenomena also trigger the LLMs' vulnerability to knowledge-conflicts again. In particular, while some models prove virtually unaffected by knowledge conflicts in affirmative and negative contexts, when faced with more semantically involved modal and conditional environments, they often fail to separate the text from their internal knowledge."
"Facing modal and conditional semantic modifications, the models often overlook them, behaving as if the statement is affirmative."