Core Concepts
Role-play prompting enhances LLM reasoning abilities, outperforming standard zero-shot and Zero-Shot-CoT approaches.
Abstract
Modern large language models (LLMs) exhibit remarkable role-playing capabilities, enriching user experiences. This work shows that role-play prompting also improves reasoning across diverse benchmarks, surpassing both the standard zero-shot baseline and Zero-Shot-CoT. The framework has two stages: first, a task-specific role-play prompt is constructed and the model's role-acknowledging reply is obtained; second, the reasoning question is posed while the model remains immersed in the established role. Results demonstrate the efficacy of role-play prompting in enhancing LLM reasoning capabilities.
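The two-stage framework can be sketched as conversation construction for a chat-style API. This is a minimal illustration, not the paper's exact pipeline: the role-setting text and the acknowledgment reply below are hand-written assumptions (in the paper, the acknowledgment is elicited from the model itself and reused as fixed context).

```python
def build_role_play_messages(question: str) -> list[dict]:
    """Assemble a role-play prompt in two stages (illustrative sketch).

    Stage 1: a task-specific role-setting prompt plus the model's
    role-acknowledging reply are fixed as conversation context.
    Stage 2: the reasoning question is asked from within that role.
    """
    # Illustrative role-setting prompt (an assumption, not the paper's text).
    role_setting = (
        "From now on, you are an excellent math teacher and always "
        "teach your students math problems correctly."
    )
    # In the two-stage framework this acknowledgment would be sampled
    # from the model and reused; here it is hand-written for illustration.
    role_feedback = (
        "Understood! As an excellent math teacher, I will guide my "
        "students through each problem step by step."
    )
    return [
        {"role": "user", "content": role_setting},
        {"role": "assistant", "content": role_feedback},
        {"role": "user", "content": question},
    ]

messages = build_role_play_messages("If 3x + 5 = 20, what is x?")
```

The resulting message list would then be sent as-is to a chat completion endpoint, so the model answers the question while already "in character," which the paper argues acts as an implicit trigger for chain-of-thought reasoning.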
Stats
In experiments using ChatGPT, accuracy on AQuA rises from 53.5% to 63.8%.
In experiments using ChatGPT, accuracy on Last Letter rises from 23.8% to 84.2%.
Quotes
"Our results demonstrate consistent improvements over the zero-shot baseline on the majority of datasets."
"Role-play prompting acts as a more effective trigger for the CoT process."