Core Concepts
LLMs can effectively select the most informative unlabeled instructions for annotation, improving performance on instruction-tuning benchmarks.
Summary
SELECTLLM is a framework that leverages an LLM's own judgment to efficiently select which unlabeled instructions to annotate. Models fine-tuned on its selections outperform those trained with other sampling methods on instruction-tuning benchmarks, perform consistently across datasets, and generalize better from one dataset to another. The framework can also be customized to different selection criteria. Experiments confirm that SELECTLLM identifies high-quality data for training language models.
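To make the selection step concrete, here is a minimal sketch of the general idea: embed and cluster the unlabeled instruction pool for diversity, then prompt an LLM to pick the most informative instructions within each cluster. The clustering setup, the prompt wording, the model choice, and the `select_instructions`/`query_llm` helpers are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch of LLM-guided instruction selection in the spirit of
# SELECTLLM. All names, the prompt, and the model choice are assumptions.
from openai import OpenAI
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def query_llm(prompt: str) -> str:
    # Any capable chat model works here; gpt-4o-mini is just an example.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def select_instructions(pool: list[str], n_clusters: int = 10,
                        per_cluster: int = 5) -> list[str]:
    # Embed and cluster so the selection covers diverse instruction types.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    labels = KMeans(n_clusters=n_clusters, n_init="auto").fit_predict(
        embedder.encode(pool)
    )

    selected = []
    for c in range(n_clusters):
        cluster = [ins for ins, lab in zip(pool, labels) if lab == c]
        prompt = (
            f"Here are {len(cluster)} unlabeled instructions, one per line, "
            f"numbered from 0. Reply with only the numbers of the "
            f"{per_cluster} most informative ones for instruction tuning, "
            "comma-separated.\n\n"
            + "\n".join(f"{i}: {ins}" for i, ins in enumerate(cluster))
        )
        # Parse the LLM's comma-separated indices and collect those picks.
        picks = [int(tok) for tok in query_llm(prompt).split(",")
                 if tok.strip().isdigit()]
        selected.extend(cluster[i] for i in picks if i < len(cluster))
    return selected
```

Selecting per cluster rather than from the whole pool at once keeps each prompt short and biases the final set toward topical diversity.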
Statistics
SELECTLLM consistently outperforms other methods on the Dolly dataset, with an average improvement of 2.6% in ROUGE score and 3% in cosine similarity across all sample sizes (see the metric sketch below).
On the Cleaned Alpaca dataset, SELECTLLM is particularly strong at the 1k and 3k sample sizes, outperforming the other methods on the cosine similarity metric.
Models trained on Dolly samples drawn with the various sampling techniques generalize well to the Cleaned Alpaca data, with SELECTLLM showing a 10% performance improvement on that test set.
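For context on how such numbers can be produced, below is a minimal sketch of the two evaluation metrics. ROUGE-L and embedding-based cosine similarity are common instantiations, and the library choices here are assumptions rather than the paper's exact setup.

```python
# A minimal sketch of scoring a model output against a reference answer
# with ROUGE-L and embedding cosine similarity; library choices are
# common defaults, not necessarily the paper's evaluation code.
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

def evaluate(prediction: str, reference: str) -> dict:
    # ROUGE-L measures longest-common-subsequence overlap with the reference.
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = scorer.score(reference, prediction)["rougeL"].fmeasure

    # Cosine similarity compares the two texts in embedding space.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    emb = embedder.encode([prediction, reference], convert_to_tensor=True)
    cos = util.cos_sim(emb[0], emb[1]).item()

    return {"rougeL": rouge_l, "cosine_similarity": cos}

print(evaluate("Paris is the capital of France.",
               "The capital of France is Paris."))
```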