This work introduces "In-context QD", a framework that leverages the pattern-matching and generative capabilities of pre-trained Large Language Models (LLMs) to propose new solutions for Quality-Diversity (QD) optimization.
The key insights are:
QD archives provide a diverse set of high-quality examples that can be effectively leveraged by LLMs through in-context learning to generate novel and improved solutions.
The prompt template, context structure, and query strategy are critical design choices that enable LLMs to extract relevant patterns from the QD archive and generate solutions that improve both quality and diversity (see the sketch below).
Experiments across a range of QD benchmarks, including black-box optimization (BBO) functions, redundant robotic arm control, and hexapod locomotion, show that In-context QD outperforms conventional QD baselines such as MAP-Elites, especially at discovering high-fitness regions.
Ablation studies highlight the importance of including both fitness and feature information in the prompt template, as well as the benefits of structuring the context to provide helpful heuristics for the LLM.
Overall, this work showcases the potential of using LLMs as efficient in-context generators for QD optimization, opening up new avenues for leveraging large-scale generative models in open-ended search and discovery.
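To make the loop concrete, here is a minimal Python sketch of one In-context QD iteration on a toy task, assuming a MAP-Elites-style archive over 2D genotypes with a single feature dimension. The prompt wording, the `query_llm` placeholder, and the toy `evaluate` function are illustrative assumptions, not the paper's exact template or tasks.

```python
import random

def evaluate(genotype):
    # Toy task (assumption): fitness is the negative sphere function,
    # the feature is the mean of the genes.
    fitness = -sum(x * x for x in genotype)
    feature = sum(genotype) / len(genotype)
    return fitness, feature

def feature_to_cell(feature, n_cells=20, low=-1.0, high=1.0):
    # Discretise the 1D feature into an archive cell index.
    idx = int((feature - low) / (high - low) * n_cells)
    return max(0, min(n_cells - 1, idx))

def build_prompt(archive, target_cell):
    # Format archive entries (genotype, fitness, feature) as in-context examples,
    # ordered by increasing fitness so a "later = better" pattern is visible,
    # then ask for a new solution aimed at the target feature cell.
    lines = ["Examples of solutions with their fitness and feature:"]
    for e in sorted(archive.values(), key=lambda e: e["fitness"]):
        g = [round(x, 3) for x in e["genotype"]]
        lines.append(f"genotype={g}, fitness={e['fitness']:.3f}, feature={e['feature']:.3f}")
    lines.append(f"Propose a new genotype with high fitness and a feature near cell {target_cell}.")
    lines.append("Answer with two comma-separated numbers only.")
    return "\n".join(lines)

def query_llm(prompt):
    # Placeholder for the LLM call. A real run would send `prompt` to a
    # chat-completion endpoint; here we return a random guess so the sketch executes.
    return f"{random.gauss(0, 0.5):.3f}, {random.gauss(0, 0.5):.3f}"

def in_context_qd_step(archive):
    target_cell = random.randrange(20)           # pick a feature cell to fill or improve
    prompt = build_prompt(archive, target_cell)  # build the context from the archive
    genotype = [float(x) for x in query_llm(prompt).split(",")]
    fitness, feature = evaluate(genotype)        # score the proposed solution
    cell = feature_to_cell(feature)
    incumbent = archive.get(cell)
    if incumbent is None or fitness > incumbent["fitness"]:  # standard QD insertion rule
        archive[cell] = {"genotype": genotype, "fitness": fitness, "feature": feature}

# Seed the archive with random solutions, then run LLM-driven steps.
archive = {}
for _ in range(5):
    g = [random.uniform(-1.0, 1.0) for _ in range(2)]
    f, d = evaluate(g)
    archive[feature_to_cell(d)] = {"genotype": g, "fitness": f, "feature": d}
for _ in range(20):
    in_context_qd_step(archive)
print(f"{len(archive)} cells filled; best fitness {max(e['fitness'] for e in archive.values()):.3f}")
```

In a real setup, `query_llm` would send the prompt to an actual LLM, and the prompt template would include both fitness and feature information for each example, in line with the design choices and ablations described above.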
Key insights extracted from the original content by Bryan Lim et al., arxiv.org, 04-25-2024: https://arxiv.org/pdf/2404.15794.pdf