Core Concepts
Pretrained language models (PLMs) benefit from complexity-based prompt selection, which improves their few-shot learning performance.
Abstract
PLMs excel in few-shot learning with proper examples.
Selecting high-quality examples is crucial for PLMs' effectiveness.
Complexity-based prompt (CP) selection enhances PLMs' performance.
CP retrieval yields significant accuracy improvements across a range of NLP tasks.
The method aligns example complexity with test sentences for better performance.
Results demonstrate state-of-the-art performance in NER and other tasks.
CP retrieval outperforms traditional prompt selection methods.
The approach is flexible and task-agnostic.
Weighted complexity scores optimize example selection.
Limitations include a focus on sequence tagging tasks and English-language data.
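The selection idea above can be sketched in a few lines: score each candidate example with a weighted complexity measure, then retrieve the examples whose scores lie closest to the test sentence's score. This is a minimal illustration, not the paper's implementation; the complexity features used here (token count and unique-token count) and their weights are assumptions chosen for clarity.

```python
# Hedged sketch of complexity-based prompt (CP) selection.
# The features and weights below are illustrative stand-ins, not the
# paper's actual complexity definition.

def complexity(tokens, weights=(0.5, 0.5)):
    """Weighted complexity score over two simple features:
    sentence length and number of unique tokens (both assumed)."""
    w_len, w_vocab = weights
    return w_len * len(tokens) + w_vocab * len(set(tokens))

def select_examples(test_tokens, pool, k=3):
    """Return the k pool sentences whose complexity score is
    closest to that of the test sentence."""
    target = complexity(test_tokens)
    ranked = sorted(pool, key=lambda ex: abs(complexity(ex) - target))
    return ranked[:k]

# Toy candidate pool (CoNLL2003-style sentences, tokenized by whitespace).
pool = [s.split() for s in [
    "EU rejects German call to boycott British lamb",
    "Peter Blackburn",
    "The European Commission said on Thursday it disagreed with German advice",
    "BRUSSELS 1996-08-22",
]]
test = "Germany imported 47600 sheep from Britain last year .".split()
chosen = select_examples(test, pool, k=2)
```

The selected examples would then be formatted as in-context demonstrations ahead of the test sentence; the intuition is that demonstrations of matched complexity give the model a better template for the test input than demonstrations picked by semantic similarity alone.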
Stats
GPT-4 achieves a 5% absolute F1 improvement on the CoNLL2003 dataset.
GPT-J-6B sees gains of up to 28.85 points (F1/Acc.).
Quotes
"We propose a complexity-based prompt selection approach for sequence tagging tasks."
"Our results demonstrate that our approach extracts greater performance from PLMs."