Enhancing Contextual Understanding in Large Language Models through Contrastive Decoding
We introduce a contrastive decoding approach that leverages both relevant and irrelevant contexts, improving large language models' ability to balance parametric (model-internal) and non-parametric (retrieved) knowledge sources during text generation.
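The core idea can be sketched as follows: score each candidate token by how much the relevant context boosts its logit relative to an irrelevant context, amplifying context-grounded evidence over the model's prior. This is a minimal, hypothetical illustration, not the paper's exact formulation; the logit values, vocabulary size, and the weighting parameter `alpha` are all assumptions for demonstration.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def contrastive_scores(logits_relevant, logits_irrelevant, alpha=1.0):
    """Boost tokens whose evidence comes from the relevant context.

    Hypothetical scoring rule: s = l_rel + alpha * (l_rel - l_irr),
    i.e. the relevant-context logit plus a contrastive bonus for the
    gap between the relevant and irrelevant conditioning.
    """
    return [lr + alpha * (lr - li)
            for lr, li in zip(logits_relevant, logits_irrelevant)]

# Toy logits over a 3-token vocabulary (assumed values).
rel = [2.0, 1.9, 0.5]   # logits when conditioned on the relevant context
irr = [2.2, 0.5, 0.4]   # logits when conditioned on an irrelevant context

adjusted = contrastive_scores(rel, irr)
probs = softmax(adjusted)

greedy_plain = max(range(len(rel)), key=rel.__getitem__)
greedy_contrastive = max(range(len(adjusted)), key=adjusted.__getitem__)
```

In this toy example, plain greedy decoding picks token 0 (favored by the model regardless of context), while contrastive scoring picks token 1, whose probability rises sharply only when the relevant context is present.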