Quantifying the Tension Between Large Language Models' Internal Knowledge and Retrieved Information in Retrieval-Augmented Generation
There is an inherent tension between a large language model's internal prior knowledge and the information presented in retrieved context, which can lead to unpredictable model behavior when the two sources disagree.
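To make the notion of "tension" concrete, one way to operationalize it is to compare a model's closed-book answer, the answer supported by the retrieved passage, and the model's final answer when the passage is in the prompt. The following minimal sketch is an assumption for illustration, not the paper's actual method; the function name and labels are hypothetical:

```python
def classify_conflict_behavior(prior_answer: str,
                               context_answer: str,
                               final_answer: str) -> str:
    """Label how a model resolved a prior/context disagreement.

    prior_answer:   the model's closed-book answer (no retrieval)
    context_answer: the answer supported by the retrieved passage
    final_answer:   the model's answer with the passage in the prompt
    """
    def norm(s: str) -> str:
        return s.strip().lower()

    p, c, f = norm(prior_answer), norm(context_answer), norm(final_answer)
    if p == c:
        return "no conflict"       # prior and context already agree
    if f == c:
        return "context-adherent"  # model deferred to retrieved evidence
    if f == p:
        return "prior-adherent"    # model kept its internal knowledge
    return "other"                 # e.g. abstention or a third answer

# Example: the retrieved passage contradicts the model's prior,
# and the model follows the passage.
print(classify_conflict_behavior("Paris", "Lyon", "Lyon"))
```

Aggregating such labels over a dataset of deliberately conflicting question/passage pairs yields a simple rate of context-adherence versus prior-adherence, which is one way the unpredictability described above can be measured.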