The paper positions retrieval-augmented language models (LMs) as the next generation of LMs, designed to address the limitations of purely parametric LMs. It argues that advances in architecture, training methodology, and infrastructure are needed before retrieval-augmented LMs can be adopted across diverse domains.
The paper details the weaknesses of parametric LMs, including factual inaccuracies, difficulty verifying their outputs, challenges in adapting to new data distributions, and prohibitively large model sizes. Retrieval-augmented LMs can mitigate these issues by consulting an external datastore at inference time rather than relying solely on knowledge encoded in model weights.
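To make that inference-time mechanism concrete, here is a minimal sketch of retrieval-augmented generation. It is not the paper's method: the `embed` function is a toy bag-of-words stand-in for a learned dense encoder, the `generate` function is a placeholder for an actual LM call, and the in-memory `DATASTORE` is an assumption for illustration.

```python
import numpy as np

# Toy external datastore: passages the LM can consult at inference time.
DATASTORE = [
    "The Eiffel Tower is located in Paris, France.",
    "Retrieval-augmented LMs query an external datastore at inference time.",
    "Parametric LMs store all knowledge in their weights.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical stand-in for a learned dense encoder: hashes tokens
    into a fixed-size bag-of-words vector, then L2-normalizes it."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k datastore passages most similar to the query
    (cosine similarity, since the embeddings are normalized)."""
    q = embed(query)
    scores = [q @ embed(doc) for doc in DATASTORE]
    top = np.argsort(scores)[::-1][:k]
    return [DATASTORE[i] for i in top]

def generate(prompt: str) -> str:
    """Placeholder for a real LM call; simply echoes the prompt here."""
    return f"[LM would generate an answer conditioned on]\n{prompt}"

def rag_answer(question: str) -> str:
    # Retrieved passages are prepended to the prompt, so the answer can be
    # grounded in (and attributed to) datastore content, not just weights.
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("Where is the Eiffel Tower?"))
```

Because the datastore sits outside the model, it can be updated or swapped without retraining, which is how this architecture addresses staleness and adaptation in a way parametric weights cannot.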
The paper then outlines a roadmap for advancing retrieval-augmented LMs: rethinking retrieval and datastores, deepening the interaction between retrievers and LMs, and building better systems for scaling and adaptation. It calls for collaborative, interdisciplinary effort to realize these advances.