This paper introduces NegativePrompt, a novel approach that leverages negative emotional stimuli to enhance the performance of large language models (LLMs). The authors draw inspiration from prominent psychological theories, including Cognitive Dissonance Theory, Social Comparison Theory, and Stress and Coping Theory, to design a set of 10 negative emotional prompts.
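For concreteness, the mechanism amounts to appending an emotional stimulus to the original task prompt. The following is a minimal sketch under that assumption; the stimulus strings and the `build_negative_prompt` helper are illustrative stand-ins, not the paper's actual prompt set:

```python
# Minimal sketch of the NegativePrompt mechanism: a negative emotional
# stimulus is appended to the original task prompt. The stimulus strings
# below are illustrative stand-ins, NOT the paper's actual 10 prompts.

NEGATIVE_STIMULI = [
    # Loosely inspired by Cognitive Dissonance Theory:
    "If you fail this task, it contradicts your claim of being capable.",
    # Loosely inspired by Social Comparison Theory:
    "Other models solve this easily; falling short would put you behind them.",
    # Loosely inspired by Stress and Coping Theory:
    "This is a high-pressure situation; an error here has serious consequences.",
]

def build_negative_prompt(task_prompt: str, stimulus_index: int = 0) -> str:
    """Return the task prompt with one negative emotional stimulus appended."""
    return f"{task_prompt} {NEGATIVE_STIMULI[stimulus_index]}"

print(build_negative_prompt("Translate the following sentence into French: 'Good morning.'"))
```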
The researchers conduct comprehensive experiments on 24 Instruction Induction tasks and 21 curated BIG-Bench tasks, evaluating NegativePrompt across five prominent LLMs: Flan-T5-Large, Vicuna, Llama 2, ChatGPT, and GPT-4. The results show that NegativePrompt significantly improves task performance, with relative enhancements of 12.89% on the Instruction Induction tasks and 46.25% on the BIG-Bench tasks.
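These relative figures are presumably computed in the standard way, as the absolute gain divided by the baseline score. A small sketch of the arithmetic, using hypothetical accuracies rather than numbers from the paper:

```python
def relative_improvement(baseline: float, enhanced: float) -> float:
    """Relative enhancement in percent: (enhanced - baseline) / baseline * 100."""
    return (enhanced - baseline) / baseline * 100.0

# Hypothetical accuracies, chosen only to illustrate the calculation:
print(f"{relative_improvement(0.62, 0.70):.2f}%")  # prints "12.90%"
```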
Further analysis explores the underlying mechanisms driving the effectiveness of NegativePrompt, including its impact on the models' comprehension of task instructions, expression of negative emotions, and ability to handle challenges. The authors also investigate the cumulative effect of deploying multiple negative emotional stimuli and the individual efficacy of each stimulus.
Additionally, the researchers use the TruthfulQA benchmark to automatically evaluate the truthfulness and informativeness of the content generated by the LLMs under NegativePrompt. The findings show that NegativePrompt substantially enhances the truthfulness and informativeness of the models' outputs.
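A hedged sketch of what such an automatic evaluation loop could look like; `generate`, `judge_truthful`, and `judge_informative` are hypothetical placeholders, since TruthfulQA's official protocol relies on fine-tuned judge models:

```python
# Hedged sketch of an automatic TruthfulQA-style evaluation loop.
# `generate`, `judge_truthful`, and `judge_informative` are hypothetical
# placeholders; the official TruthfulQA protocol uses fine-tuned judge models.

def evaluate_truthfulqa(questions, generate, judge_truthful, judge_informative):
    """Return the fraction of answers judged truthful and informative."""
    truthful = informative = 0
    for question in questions:
        answer = generate(question)
        truthful += int(judge_truthful(question, answer))
        informative += int(judge_informative(question, answer))
    n = len(questions)
    return {"truthful": truthful / n, "informative": informative / n}
```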
Overall, this study contributes significantly to the understanding of the interaction between LLMs and emotion, and showcases the practical efficacy of NegativePrompt as an emotion-driven method for enhancing LLM performance in real-world applications.
Key insights extracted from the paper by Xu Wang, Chen... (arxiv.org, 05-07-2024): https://arxiv.org/pdf/2405.02814.pdf