Even minor changes to prompts can significantly alter the predictions of large language models, with some variations leading to substantial performance degradation.
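A minimal sketch of what such a sensitivity check might look like: the same task is scored under several semantically equivalent templates and the per-variant accuracies are compared. The `call_llm` helper is a hypothetical placeholder for a real model client, not any particular paper's API.

```python
# Minimal sketch of a prompt-sensitivity check: run the same model over
# semantically equivalent prompt templates and compare accuracy per variant.

def call_llm(prompt: str) -> str:
    # Placeholder; swap in a real chat-completion call.
    return "84"  # fixed dummy answer keeps the sketch runnable end to end

TEMPLATES = [
    "Answer with the final number only. {q}",
    "{q} Respond with just the numeric answer.",
    "Question: {q}\nAnswer (number only):",
]

def accuracy(template: str, dataset: list[tuple[str, str]]) -> float:
    hits = sum(
        call_llm(template.format(q=q)).strip() == gold for q, gold in dataset
    )
    return hits / len(dataset)

dataset = [("What is 12 * 7?", "84"), ("What is 56 + 28?", "84")]
for template in TEMPLATES:
    print(f"{accuracy(template, dataset):.0%}  {template!r}")
```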
Allowing large language models to rephrase and expand on questions before responding can significantly improve their performance across a wide range of reasoning tasks.
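A minimal sketch of this two-step "rephrase, then respond" pattern, again assuming a hypothetical `call_llm(prompt) -> str` wrapper around any LLM API; the prompt wording is illustrative, not quoted from the original paper.

```python
# Two-step prompting: first ask the model to clarify the question,
# then ask it to answer its own clarified version.

def call_llm(prompt: str) -> str:
    # Placeholder; replace with a real chat-completion call.
    return "(model output)"

REPHRASE = (
    "Rephrase and expand the question below so it is fully unambiguous, "
    "but do not answer it yet.\n\nQuestion: {q}"
)
RESPOND = (
    "Original question: {q}\n"
    "Clarified question: {clarified}\n\n"
    "Answer the clarified question."
)

def rephrase_and_respond(question: str) -> str:
    clarified = call_llm(REPHRASE.format(q=question))
    return call_llm(RESPOND.format(q=question, clarified=clarified))

print(rephrase_and_respond("Was Lincoln born in an even month?"))
```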
Across different prompting techniques and language models, non-native language prompts outperform native language prompts in eliciting desired outputs for a variety of social media and news-related NLP tasks.
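One way to probe this kind of comparison is to run the same classification task under a native-language template and an English one and compare accuracy. The German templates, example data, and `call_llm` helper below are illustrative assumptions, not drawn from a specific study.

```python
# Sketch: score the same sentiment task under a native-language prompt and
# an English (non-native) one. Both templates fix an English label space so
# outputs are directly comparable.

def call_llm(prompt: str) -> str:
    return "positive"  # placeholder; swap in a real model call

PROMPTS = {
    "native (German)": (
        "Ist die Stimmung dieses Satzes positiv oder negativ? "
        "Antworte nur mit 'positive' oder 'negative'.\n\n{text}"
    ),
    "non-native (English)": (
        "Is the sentiment of this sentence positive or negative? "
        "Answer only with 'positive' or 'negative'.\n\n{text}"
    ),
}

data = [
    ("Der Film war großartig.", "positive"),
    ("Der Service war furchtbar.", "negative"),
]

for name, template in PROMPTS.items():
    hits = sum(
        call_llm(template.format(text=text)).strip().lower() == gold
        for text, gold in data
    )
    print(f"{name}: {hits / len(data):.0%}")
```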
Contrastive learning can drive automated prompt optimization and adaptation, keeping prompts effective across different model versions, model families, and languages.
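Since the framework itself is not detailed here, the following is only a loose sketch of one way contrastive signals could drive prompt search: each round pairs an example the current prompt solves with one it fails, and asks an optimizer model for a revision that keeps the success while fixing the failure. This uses contrasting success/failure pairs as a textual signal, a rough analogue of contrastive learning rather than the embedding-based kind, and every name (`call_llm`, `is_correct`, `optimize`) is a hypothetical stand-in.

```python
# Loose sketch of contrastive prompt search over a small labeled dataset.

import random

def call_llm(prompt: str) -> str:
    return "(model output)"  # placeholder for a real LLM call

def is_correct(task_prompt: str, example: tuple[str, str]) -> bool:
    question, gold = example
    return call_llm(f"{task_prompt}\n\n{question}").strip() == gold

def optimize(task_prompt: str, dataset, rounds: int = 5) -> str:
    best = task_prompt
    for _ in range(rounds):
        solved = [ex for ex in dataset if is_correct(best, ex)]
        failed = [ex for ex in dataset if not is_correct(best, ex)]
        if not solved or not failed:
            break  # nothing left to contrast against
        pos, neg = random.choice(solved), random.choice(failed)
        candidate = call_llm(
            "Revise the task prompt below. It worked on the first example "
            "but failed on the second; keep the success and fix the failure.\n"
            f"Prompt: {best}\nWorked on: {pos[0]}\nFailed on: {neg[0]}"
        )
        # Keep the revision only if it scores at least as well overall.
        if (sum(is_correct(candidate, ex) for ex in dataset)
                >= sum(is_correct(best, ex) for ex in dataset)):
            best = candidate
    return best

best = optimize("Solve the problem and output only the answer.",
                [("What is 12 * 7?", "84")])
print(best)
```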