ConstitutionalExperts presents a novel approach to prompt optimization in which a prompt is learned as a set of natural-language principles (a "constitution"). The method improves prompts incrementally by editing individual principles rather than rewriting the prompt as a whole. By training a unique prompt for each semantic region of the data and combining them in a mixture-of-experts (MoE) architecture, ConstitutionalExperts outperforms other state-of-the-art prompt-optimization techniques. Across six benchmark datasets, it beats the best prior method by 10.9% (F1 score). The evaluation also shows that adding MoE improves all of the compared techniques, suggesting the architecture is broadly applicable.
The method clusters the training data, trains one Expert per cluster, and routes each input at inference time to the Expert whose cluster centroid is most similar to it. Each Expert's prompt is refined iteratively: candidate mutations of individual principles are proposed and evaluated on a validation set, and the best candidate is kept. Because the prompt is structured as discrete principles, changes are targeted rather than wholesale rewrites, which improves both interpretability and controllability.
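The pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the k-means routine, the `route` function, and the `optimize_expert` loop are hypothetical stand-ins, and the embedding, mutation, and scoring functions are assumed to be supplied by the caller (in the paper, mutations are LLM-proposed edits to individual principles and scoring is validation-set F1).

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: partition training-example embeddings
    into k semantic clusters (one Expert per cluster)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def route(x, centroids):
    """At inference, send the input embedding to the Expert
    whose cluster centroid is closest."""
    return int(np.linalg.norm(centroids - x, axis=1).argmin())

def optimize_expert(prompt, mutate, score, val_set, steps=10):
    """Iterative refinement: propose a mutated candidate
    (e.g. edit one principle), keep it if it scores better
    on the validation set."""
    best, best_score = prompt, score(prompt, val_set)
    for _ in range(steps):
        candidate = mutate(best)
        s = score(candidate, val_set)
        if s > best_score:
            best, best_score = candidate, s
    return best
```

As a toy usage example, embeddings forming two separated blobs are clustered into two Experts, and a new input is routed to the nearer one; `optimize_expert` is exercised with a trivial mutate/score pair that rewards longer constitutions.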
Comparisons against standard baselines, including zero-shot and few-shot prompting, chain-of-thought, and LoRA tuning, show that ConstitutionalExperts performs strongly across the evaluated datasets, and adding MoE further improves results, underscoring the versatility of the approach. Future work could apply the method to other NLP tasks and investigate alternative clustering methods for more effective routing.
Source: arxiv.org