This work introduces "In-context QD", a framework that utilizes the pattern-matching and generative capabilities of pre-trained Large Language Models (LLMs) to generate new solutions for Quality-Diversity (QD) optimization.
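The paper's exact algorithm is not reproduced here, but the following minimal sketch illustrates the general idea of using an LLM as the solution generator inside a MAP-Elites-style loop. The grid resolution, the toy 2-parameter objective, and the `query_llm` stub are hypothetical placeholders, not the authors' implementation.

```python
import random

# Toy problem with a 1-D feature and 2 parameters; the paper's benchmarks
# (BBO functions, robotic arm, hexapod) are more complex. Names are illustrative.
GRID_CELLS = 20  # number of feature bins in the archive

def evaluate(solution):
    """Return (fitness, feature) for a candidate solution (hypothetical task)."""
    x, y = solution
    fitness = -(x ** 2 + y ** 2)                  # quality: negated sphere function
    feature = max(0.0, min(0.999, (x + 1) / 2))   # feature descriptor in [0, 1)
    return fitness, feature

def query_llm(context_examples):
    """Placeholder for the LLM call. In In-context QD this step would format the
    archive examples into a prompt and ask the model for a new solution; here we
    simply perturb a random context example so the sketch runs end to end."""
    parent = random.choice(context_examples)["solution"]
    return [p + random.gauss(0, 0.1) for p in parent]

def add_to_archive(archive, solution, fitness, feature):
    """Standard MAP-Elites insertion: keep the best solution per feature cell."""
    cell = int(feature * GRID_CELLS)
    if cell not in archive or fitness > archive[cell]["fitness"]:
        archive[cell] = {"solution": solution, "fitness": fitness, "feature": feature}

def in_context_qd(iterations=1000, context_size=8):
    archive = {}  # cell index -> {"solution", "fitness", "feature"}
    # Seed the archive with random solutions.
    for _ in range(context_size):
        sol = [random.uniform(-1, 1), random.uniform(-1, 1)]
        add_to_archive(archive, sol, *evaluate(sol))
    for _ in range(iterations):
        # Sample a diverse, high-quality context from the current archive.
        context = random.sample(list(archive.values()),
                                k=min(context_size, len(archive)))
        candidate = query_llm(context)
        add_to_archive(archive, candidate, *evaluate(candidate))
    return archive

if __name__ == "__main__":
    archive = in_context_qd()
    print(f"filled {len(archive)}/{GRID_CELLS} cells, "
          f"best fitness {max(e['fitness'] for e in archive.values()):.3f}")
```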
The key insights are:
- QD archives provide a diverse set of high-quality examples that LLMs can effectively leverage through in-context learning to generate novel and improved solutions.
- The prompt template, context structure, and query strategy are critical design choices that enable LLMs to extract relevant patterns from the QD archive and generate solutions that improve both quality and diversity (see the prompt-construction sketch after this list).
- Experiments across a range of QD benchmarks, including BBO functions, redundant robotic arm control, and hexapod locomotion, demonstrate that In-context QD outperforms conventional QD baselines such as MAP-Elites, especially in finding regions of high fitness.
- Ablation studies highlight the importance of including both fitness and feature information in the prompt template, as well as the benefits of structuring the context to provide helpful heuristics for the LLM.
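To make the prompt-design insights above concrete, here is a hedged sketch of how archive entries (in the same format as the loop sketch above) might be serialized into an in-context prompt that carries both fitness and feature information and ends with a query for a target feature cell. The template, field names, and sort order are assumptions for illustration; the paper's actual prompt format may differ.

```python
def build_prompt(context_examples, target_feature):
    """Format archive elites into a few-shot prompt (illustrative template only).

    Each example lists its feature descriptor, fitness, and solution parameters.
    Examples are sorted by ascending fitness so that "later = better" serves as a
    simple heuristic the model can pick up. The final line queries a new solution
    for a target feature value, leaving the parameters for the LLM to complete.
    """
    lines = []
    for ex in sorted(context_examples, key=lambda e: e["fitness"]):
        params = ",".join(f"{p:.3f}" for p in ex["solution"])
        lines.append(f"feature={ex['feature']:.2f} fitness={ex['fitness']:.3f} "
                     f"solution=[{params}]")
    lines.append(f"feature={target_feature:.2f} fitness= solution=[")
    return "\n".join(lines)
```

A query strategy could then choose `target_feature` values corresponding to empty or low-fitness archive cells, so that generated solutions push on diversity as well as quality; whether this matches the paper's exact query strategy is not guaranteed by this sketch.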
Overall, this work showcases the potential of using LLMs as efficient in-context generators for QD optimization, opening up new avenues for leveraging large-scale generative models in open-ended search and discovery.