The paper presents "Rephrase and Respond" (RaR), a method that lets large language models (LLMs) rephrase and expand questions posed by humans and then answer them, all within a single prompt. The approach targets the disparity between human and LLM frames of thought, which can lead an LLM to misinterpret a question that looks unambiguous to its human author.
The key insights and findings are:
LLMs can exhibit frames of thought that differ from those of humans, leading to unexpected interpretations of questions. This is demonstrated through examples where LLMs such as GPT-4 answer incorrectly because of ambiguities in the original question, e.g., "Was Abraham Lincoln born on an even day?", which hinges on what "even day" refers to.
The RaR method, where the LLM first rephrases and expands on the question before responding, consistently improves the performance of various LLMs, including GPT-4, GPT-3.5, and Vicuna, across a diverse set of reasoning tasks.
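As a rough illustration, the one-step prompt can be implemented in a few lines against a chat-completion API. This is a minimal sketch assuming the OpenAI Python client; the model name and example question (borrowed from the paper's "even day" task) are placeholders, and the appended instruction follows the paper's one-step RaR wording.

```python
# One-step RaR: append a rephrase-and-respond instruction to the original
# question so the model clarifies the question before answering it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rephrase_and_respond(question: str, model: str = "gpt-4") -> str:
    # The appended instruction is the paper's one-step RaR prompt.
    prompt = f'"{question}"\nRephrase and expand the question, and respond.'
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

print(rephrase_and_respond("Was Abraham Lincoln born on an even day?"))
```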
Variations of the RaR prompt with slightly different wording remain effective, indicating that the approach is robust to the exact phrasing of the instruction.
More advanced LLMs, such as GPT-4, benefit the most from the RaR method, while less complex models like Vicuna see more modest improvements.
The paper introduces a two-step variant of RaR, where a rephrasing LLM first generates a clarified question, which is then used by a responding LLM. This allows stronger LLMs to assist weaker ones in improving question comprehension.
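A minimal sketch of the two-step variant follows, again assuming the OpenAI Python client. The prompt wording is paraphrased from the paper rather than quoted verbatim, and the two model names stand in for any stronger/weaker pairing.

```python
# Two-step RaR: a (typically stronger) rephrasing model first produces a
# clarified question, which is then handed to a responding model together
# with the original question.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

def two_step_rar(question: str,
                 rephrasing_model: str = "gpt-4",
                 responding_model: str = "gpt-3.5-turbo") -> str:
    # Step 1: the rephrasing LLM clarifies and expands the question.
    rephrased = ask(
        rephrasing_model,
        f'"{question}"\nGiven the above question, rephrase and expand it '
        "to help you better answer it. Maintain all information in the "
        "original question.",
    )
    # Step 2: the responding LLM answers using both versions of the question.
    return ask(
        responding_model,
        f'(original) "{question}"\n(rephrased) "{rephrased}"\n'
        "Use your answer to the rephrased question to answer the original "
        "question.",
    )

print(two_step_rar("Was Abraham Lincoln born on an even day?"))
```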
The RaR method is shown to be complementary to the Chain-of-Thought (CoT) prompting technique, and the two can be combined to achieve even better performance.
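One plausible way to combine the two in a single prompt is to append the standard zero-shot CoT trigger ("Let's think step by step.") after the RaR instruction. The sketch below assumes the OpenAI Python client and an illustrative question; it is not the paper's verbatim combined prompt.

```python
# Combining one-step RaR with zero-shot CoT: the RaR instruction asks the
# model to clarify the question, and the CoT trigger elicits step-by-step
# reasoning before the final answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A coin is heads up. Ben flips the coin. Is the coin still heads up?"
prompt = (
    f'"{question}"\n'
    "Rephrase and expand the question, and respond. "
    "Let's think step by step."
)
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(completion.choices[0].message.content)
```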
Overall, the paper demonstrates that allowing LLMs to rephrase and expand on questions can be a simple yet effective way to enhance their reasoning capabilities in the zero-shot setting.
Source: arxiv.org