The study examines the efficacy of prompt optimization for large language models, highlighting the importance of well-chosen local optima and of the input domain over which optimization is performed. The proposed ZOPO (zeroth-order prompt optimization) algorithm outperforms existing baselines in both optimization performance and query efficiency.
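The summary above does not spell out how zeroth-order optimization works. As background only, here is a minimal sketch of the generic two-point zeroth-order gradient estimate that this family of methods builds on, applied to a toy scalar objective; the function names, step sizes, and objective are illustrative assumptions, not ZOPO's actual procedure (which operates over prompt representations).

```python
import random

random.seed(0)  # deterministic for the toy run below

def zeroth_order_gradient(f, x, mu=1e-3):
    # Two-point zeroth-order estimate: probe f along a random direction u
    # and use the finite difference (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,
    # which equals f'(x) in expectation (for small mu). No analytic gradient needed,
    # only function evaluations ("queries").
    u = random.gauss(0.0, 1.0)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def optimize(f, x0, lr=0.05, steps=200):
    # Plain gradient descent driven by the zeroth-order estimate.
    x = x0
    for _ in range(steps):
        x -= lr * zeroth_order_gradient(f, x)
    return x

f = lambda x: (x - 3.0) ** 2  # toy objective with minimum at x = 3
x_star = optimize(f, x0=0.0)
```

Because each iteration costs only two function evaluations, query efficiency comes down to how many such probes are needed, which is the axis on which the summary says ZOPO improves over baselines.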
Leveraging insights from gradient-based model optimization, this work proposes GPO, a gradient-inspired LLM-based prompt optimizer that effectively and efficiently improves the performance of large language models across a variety of tasks.
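GPO's concrete update rule is not given in this summary. The gradient analogy can be sketched as a greedy refinement loop in which a proposed edit (in GPO, produced by an LLM) is kept only when it improves a task score, much as a gradient step is kept when it reduces the loss. The helpers `propose_edit` and `score` below are hypothetical stand-ins, not GPO's actual components.

```python
def optimize_prompt(seed_prompt, propose_edit, score, steps=5):
    # Greedy hill-climbing over prompts: accept a candidate edit only if it
    # improves the score, loosely mirroring a gradient descent step.
    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(steps):
        candidate = propose_edit(best)  # in GPO this would be an LLM call
        s = score(candidate)            # task metric, e.g. validation accuracy
        if s > best_score:
            best, best_score = candidate, s
    return best

# Toy stand-ins so the sketch runs end to end.
def propose_edit(p):
    return p + " Think step by step."

def score(p):
    return len(p)  # toy "metric": here, longer prompts score higher

result = optimize_prompt("Answer the question.", propose_edit, score, steps=3)
```

The sketch shows only the outer accept/reject loop; the substance of a gradient-inspired optimizer lies in how the edit proposals are generated and how feedback from failures is folded into the next proposal.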