Basic Concepts
Pretrained language models benefit from complexity-based prompt selection for improved few-shot learning performance.
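The paper does not spell out its scoring function here, so the following is only a minimal sketch of what complexity-based demonstration selection for a sequence-tagging prompt might look like. The complexity measure (number of non-"O" entity tags, with sentence length as a tie-breaker), the function names, and the prompt format are all illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch: pick the most "complex" labeled examples as
# in-context demonstrations for a few-shot sequence-tagging prompt.
# The complexity score below is an assumption made for illustration.

def complexity(example):
    """Score a labeled example: more entity tags and longer sentences
    are treated as more complex."""
    tokens, tags = example
    num_entities = sum(1 for t in tags if t != "O")
    return (num_entities, len(tokens))

def select_demonstrations(pool, k=4):
    """Return the k most complex labeled examples from the candidate pool."""
    return sorted(pool, key=complexity, reverse=True)[:k]

def build_prompt(demonstrations, query_tokens):
    """Format demonstrations plus the query sentence into a tagging prompt."""
    lines = []
    for tokens, tags in demonstrations:
        lines.append("Sentence: " + " ".join(tokens))
        lines.append("Tags: " + " ".join(tags))
    lines.append("Sentence: " + " ".join(query_tokens))
    lines.append("Tags:")
    return "\n".join(lines)

if __name__ == "__main__":
    pool = [
        (["John", "lives", "in", "Paris"], ["B-PER", "O", "O", "B-LOC"]),
        (["It", "rained", "today"], ["O", "O", "O"]),
        (["Acme", "Corp", "hired", "Mary", "in", "Berlin"],
         ["B-ORG", "I-ORG", "O", "B-PER", "O", "B-LOC"]),
    ]
    demos = select_demonstrations(pool, k=2)
    print(build_prompt(demos, ["Alice", "visited", "London"]))
```

Any comparable heuristic (entity density, token count, tag diversity) could stand in for the scoring function; the point of the sketch is only that demonstrations are ranked by a complexity criterion before being placed in the prompt.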
Statistics
GPT-4 achieves a 5% absolute improvement in F1 score on the CoNLL2003 dataset.
GPT-J-6B sees gains of up to 28.85 points (F1/Acc.).
Quotes
"We propose a complexity-based prompt selection approach for sequence tagging tasks."
"Our results demonstrate that our approach extracts greater performance from PLMs."