Core Concepts
SEED is a novel adaptation approach for Large Language Models (LLMs) in code generation scenarios with limited training data, achieving superior performance over standard fine-tuning.
Abstract
SEED introduces an error-driven learning approach that adapts LLMs efficiently to code generation tasks using few training samples. It proceeds in four steps: error code collection, automatic code revision, model optimization, and iterative adaptation. Experimental results show significant improvements over traditional fine-tuning methods.
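The four-step loop can be sketched as follows. This is an illustrative toy, not the paper's implementation: the "model" is a plain dict, `generate`, `passes_tests`, and `revise` are hypothetical stand-ins for the LLM sampler, test harness, and automatic reviser, and "optimization" is simulated by storing revised code rather than fine-tuning.

```python
# Hypothetical sketch of SEED's error-driven adaptation loop.
# All names here are illustrative stubs, not the paper's actual API.

def generate(model, task):
    # Stand-in for LLM code generation: the "model" is a dict
    # mapping task ids to its current candidate solution.
    return model.get(task["id"], task["wrong_code"])

def passes_tests(code, task):
    # Step 1 (error code collection): run the candidate against
    # the task's unit tests to decide whether it is erroneous.
    env = {}
    exec(code, env)
    return all(env["solve"](x) == y for x, y in task["tests"])

def revise(code, task):
    # Step 2 (automatic code revision): SEED revises the model's own
    # erroneous output; this stub simply returns a reference fix.
    return task["reference_code"]

def seed_adapt(model, tasks, iterations=2):
    for _ in range(iterations):          # Step 4: iterative adaptation
        revised_pairs = []
        for task in tasks:
            code = generate(model, task)
            if not passes_tests(code, task):
                revised_pairs.append((task, revise(code, task)))
        # Step 3 (model optimization): stand-in for fine-tuning the
        # LLM on (task, revised code) pairs.
        for task, fixed in revised_pairs:
            model[task["id"]] = fixed
    return model

task = {
    "id": "double",
    "wrong_code": "def solve(x):\n    return x + 1",
    "reference_code": "def solve(x):\n    return 2 * x",
    "tests": [(1, 2), (3, 6)],
}
adapted = seed_adapt({}, [task])
```

The key design point the sketch mirrors is that training targets come from revising the model's own errors rather than from unrelated dataset samples, keeping updates close to the model's output distribution.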
Stats
SEED achieves relative improvements of 27.2%-325.0% in Pass@1 compared to traditional fine-tuning approaches.
The average distance between the revised code and the model's erroneous outputs is significantly smaller than the distance between those erroneous outputs and the original dataset samples, indicating that revisions stay close to the model's own output distribution.
Quotes
"SEED leverages the errors made by LLMs as learning opportunities, using error revision to overcome its own shortcomings."
"Experimental results show that SEED consistently demonstrates strong performance across various LLMs."