Core Concept
Contrastive prompting improves the reasoning abilities of large language models, yielding significant performance gains across a variety of tasks.
Summary
Contrastive prompting enhances the reasoning capabilities of large language models by asking them to generate both a correct and an incorrect answer. The method outperforms zero-shot and few-shot prompting on arithmetic, commonsense, and symbolic reasoning tasks, without requiring manually labeled examples. It also integrates with existing prompting methods and shows promising results against state-of-the-art techniques.
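As a minimal sketch of the idea, a contrastive prompt can be built by appending a trigger phrase that asks the model for both a correct and a wrong answer. The function name and exact trigger wording below are illustrative assumptions, not necessarily the paper's exact phrasing:

```python
def build_contrastive_prompt(question: str) -> str:
    """Wrap a question with a contrastive trigger that asks the model
    to produce both a correct and an incorrect answer, prompting it
    to reason by contrast."""
    # Assumed trigger phrase based on the method description above;
    # the paper may use different wording.
    trigger = "Let's give a correct and a wrong answer."
    return f"Q: {question}\nA: {trigger}"

prompt = build_contrastive_prompt(
    "A farmer has 15 sheep and buys 8 more. How many sheep does he have now?"
)
print(prompt)
```

The trigger is appended zero-shot, so no labeled demonstrations are needed, which matches the label-free property described above.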
Statistics
Accuracy on GSM8K improves from 35.9% (zero-shot CoT) to 88.8%.
Accuracy on AQUA-RAT increases from 41.3% to 62.2% with the GPT-4 model.
Quotes
"Prompting methods play a crucial role in enhancing the capabilities of pre-trained large language models." - Liang Yao