Core Concepts
Prompting techniques have emerged as a powerful way to steer autoregressive large language models (LLMs) towards desired outcomes, reducing or eliminating the need for costly fine-tuning.
Abstract
This paper provides a comprehensive survey of the current literature on prompting techniques for autoregressive large language models (LLMs). The authors first introduce the necessary background on language models and prompting, then present a detailed taxonomy of prompting methods along two key dimensions: the level of human involvement in prompt creation and the function the prompts serve in the downstream task.
The survey covers a wide range of prompting techniques, including:
Hand-crafted prompts: Prompts written manually from human intuition, either in a zero-shot setting (task instruction only) or a few-shot setting (instruction plus worked examples); a toy example contrasting these styles appears after this list.
Automated prompts:
  Discrete prompts: Prompts that remain actual text fed to the LLM, discovered automatically through techniques such as mining, paraphrasing, and gradient-based search (see the search sketch after this list).
  Continuous prompts: Prompts defined in the embedding space of the LLM, with trainable parameters that can be optimized directly (see the soft-prompt sketch after this list).
Task-based prompts: Prompts focused solely on the downstream task objective.
Generate-auxiliary prompts:
  Chain-of-thought (CoT) prompts: Prompts that elicit a coherent series of intermediate reasoning steps from the LLM before the final answer (illustrated in the toy example after this list).
  Generate-knowledge prompts: Prompts that first generate task-specific knowledge, which is then used to facilitate the downstream task.
Resource/tools-augmented prompts: Prompts that integrate external resources and tools to enhance the effectiveness of prompting.
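To make the hand-crafted and chain-of-thought categories concrete, here is a minimal sketch; the prompts and numbers are illustrative inventions, not drawn from the survey.

```python
# Illustrative hand-crafted prompts (toy content, not from the survey).

# Few-shot prompt: the worked exemplar shows only the final answer.
few_shot_prompt = """Q: A pen costs 2 dollars. How much do 4 pens cost?
A: 8 dollars.

Q: A book costs 5 dollars. How much do 3 books cost?
A:"""

# Chain-of-thought prompt: the same exemplar, but with the intermediate
# reasoning spelled out, nudging the model to reason step by step.
cot_prompt = """Q: A pen costs 2 dollars. How much do 4 pens cost?
A: Each pen costs 2 dollars, so 4 pens cost 4 * 2 = 8 dollars. The answer is 8.

Q: A book costs 5 dollars. How much do 3 books cost?
A:"""

# Either string would be sent to the LLM verbatim; a zero-shot variant would
# drop the worked exemplar and keep only the final question.
```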
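As a rough illustration of automated discrete prompt search, the sketch below picks among paraphrased candidates of a seed prompt by their score on a small labeled development set. The helpers `paraphrase` and `score_on_dev` are hypothetical stand-ins for a paraphrase model and a task evaluator; the survey's mining and gradient-based methods are considerably more involved.

```python
# Minimal sketch of discrete prompt search by paraphrasing-and-scoring.
# `paraphrase` and `score_on_dev` are hypothetical callables supplied by the
# caller, not part of any specific library.

def select_prompt(seed_prompt, dev_set, paraphrase, score_on_dev, n_candidates=20):
    # Generate candidate rewrites of the seed prompt; keep the seed itself too.
    candidates = [seed_prompt] + [paraphrase(seed_prompt) for _ in range(n_candidates)]
    # Return the candidate that performs best on the labeled dev set.
    return max(candidates, key=lambda p: score_on_dev(p, dev_set))
```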
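For the continuous-prompt category, a common realization is a small set of trainable "soft prompt" vectors prepended to the frozen model's input embeddings. The PyTorch module below is a minimal sketch of that idea; the class name, dimensions, and initialization are assumptions for illustration, not the survey's reference implementation.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Minimal sketch of a continuous prompt: n_tokens trainable vectors
    prepended to the (frozen) LLM's input embeddings."""

    def __init__(self, n_tokens: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters: the prompt embeddings themselves.
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen model's
        # embedding layer; the soft prompt is prepended along the sequence axis.
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# During prompt tuning, only SoftPrompt.parameters() are updated; the LLM's
# own weights stay frozen.
```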
The paper also discusses several open problems and future research directions in the field of prompting, including addressing sub-optimal prompts, handling structured data, answer engineering, and mitigating prompt injection attacks.