Key Concepts
Contrastive prompting improves the reasoning abilities of large language models, yielding significant performance gains across a range of tasks.
Summary
Contrastive prompting enhances large language models' reasoning capabilities by asking the model to generate both correct and incorrect answers. The method outperforms standard zero-shot and few-shot prompting on arithmetic, commonsense, and symbolic reasoning tasks, without requiring manually labeled examples, and it integrates seamlessly with existing techniques, showing promising results against the state of the art.
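The core idea — eliciting a correct and an incorrect answer in a single zero-shot prompt — can be sketched as a simple prompt builder. The trigger wording and the `Correct answer:` marker below are illustrative assumptions, not quoted from the paper:

```python
def build_cp_prompt(question: str) -> str:
    """Build a zero-shot contrastive prompt that asks the model for both
    a correct and a wrong answer (trigger phrasing is an assumption)."""
    trigger = "Let's give a correct and a wrong answer."
    return f"Q: {question}\nA: {trigger}"


def extract_final_answer(model_output: str) -> str:
    """Toy parser: take the text after the last 'Correct answer:' marker.
    Assumes the model labels its two answers; a real pipeline would use
    the paper's own answer-extraction step."""
    marker = "Correct answer:"
    idx = model_output.rfind(marker)
    return model_output[idx + len(marker):].strip() if idx != -1 else model_output.strip()


prompt = build_cp_prompt("If a pen costs $2 and a notebook costs $3, how much do 4 pens and 2 notebooks cost?")
print(prompt)
```

The prompt would then be sent to any LLM; no labeled demonstrations are needed, which is what distinguishes this from few-shot prompting.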
Statistics
Contrastive prompting improves GPT-4 accuracy on GSM8K from 35.9% to 88.8%.
AQUA-RAT accuracy increases from 41.3% to 62.2% with GPT-4.
Quotes
"Prompting methods play a crucial role in enhancing the capabilities of pre-trained large language models." - Liang Yao