Data-Efficient Fine-Tuning of Pre-Trained Language Models via Unsupervised Core-Set Selection
We present DEFT-UCS, a data-efficient fine-tuning framework that uses unsupervised core-set selection to identify a smaller, representative subset of the training data, reducing the amount of data needed to fine-tune pre-trained language models for downstream tasks.
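One common way to realize unsupervised core-set selection is to embed the training examples, cluster the embeddings, and keep the sample closest to each cluster centroid as a representative. The sketch below illustrates this idea with a small NumPy-only k-means; the function name `coreset_select` and the clustering-based procedure are illustrative assumptions, not necessarily the exact selection criterion used by DEFT-UCS.

```python
import numpy as np

def coreset_select(embeddings, k, n_iter=20, seed=0):
    """Illustrative core-set selection: cluster embeddings with k-means,
    then return the index of the sample nearest each centroid.
    (A sketch of one plausible unsupervised selection scheme; the
    DEFT-UCS paper's exact procedure may differ.)"""
    rng = np.random.default_rng(seed)
    X = np.asarray(embeddings, dtype=float)
    # initialize centroids from k distinct random samples
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # recompute centroids, keeping the old one if a cluster empties
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    # for each centroid, the closest real sample joins the core set
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    return sorted(set(dists.argmin(axis=0).tolist()))

# toy usage: six 2-D "embeddings" forming two obvious clusters
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
idx = coreset_select(pts, k=2)
print(idx)  # two indices, one drawn from each cluster
```

In a fine-tuning pipeline, `embeddings` would typically come from a sentence encoder applied to the raw training examples, and the selected indices define the reduced dataset passed to the fine-tuning loop.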