The paper proposes Prompt Space, a novel approach that addresses the lack of a principled mathematical method for selecting optimal prompts in prompt engineering. Prompt Space embeds questions as text vectors and applies matrix decomposition to obtain basis vectors, which are then used to construct a space that represents all prompts.
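The embedding-and-decomposition idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random matrix stands in for embeddings that would come from a real text encoder, and the selection rule (picking the question whose embedding projects most strongly onto each basis vector) is one plausible reading of how basis vectors map back to exemplar questions.

```python
import numpy as np

# Stand-in embeddings: in practice each row would be the text embedding
# of one candidate question from the benchmark.
rng = np.random.default_rng(0)
questions = [f"question {i}" for i in range(20)]
E = rng.normal(size=(20, 8))  # one embedding row per question

# Matrix decomposition: SVD of the embedding matrix yields basis vectors.
U, S, Vt = np.linalg.svd(E, full_matrices=False)
k = 3                  # assumed number of basis vectors / exemplars to keep
basis = Vt[:k]         # top-k right singular vectors span the prompt space

# For each basis vector, select the question whose embedding projects
# onto it most strongly; these serve as the few-shot exemplars.
scores = np.abs(E @ basis.T)       # (20, k) projection magnitudes
exemplar_ids = scores.argmax(axis=0)
exemplars = [questions[i] for i in exemplar_ids]
print(exemplars)
```

The number of basis vectors `k` corresponds to the number of exemplars, which is exactly the quantity the paper studies when asking how many basis questions each reasoning task needs.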
The key highlights and insights are:
Prompt Space outperforms state-of-the-art prompt paradigms, including Chain of Thought (CoT), Zero-CoT, and In-context learning, on ten public reasoning benchmarks. Notably, without the help of the CoT method and the "Let's think step by step" prompt, Prompt Space shows superior performance over the few-shot method.
Prompt Space provides a robust and effective mathematical framework for selecting simple and effective prompts, marking a significant step towards improving prompt engineering for a wide variety of applications in large language models.
The paper investigates the impact of the number of basis questions on reasoning tasks and identifies the relationship between the selected questions and the reasoning ability of large language models. It also explores how to determine the optimal number of exemplars for each reasoning task.
Extensive experiments demonstrate that Prompt Space establishes a reliable and mathematical methodology for selecting simple and effective prompts, outperforming state-of-the-art methods on arithmetic, commonsense, and symbolic reasoning tasks.