Pretrained language models benefit from complexity-based prompt selection for improved few-shot learning performance.
The author proposes a complexity-based prompt selection approach for sequence tagging tasks to improve the few-shot learning ability of pretrained language models: by aligning the syntactico-semantic complexity of in-context examples with that of the test sentence, the method achieves significant performance gains.
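To make the selection idea concrete, here is a minimal sketch. The complexity proxy used (token count plus type-token ratio), the demonstration pool, and the function names `complexity` and `select_demonstrations` are all illustrative assumptions; they stand in for, and do not reproduce, the paper's actual syntactico-semantic complexity measure.

```python
# Minimal sketch of complexity-matched demonstration selection.
# Assumption: a crude complexity proxy (length + lexical diversity)
# stands in for the paper's syntactico-semantic complexity measure.

def complexity(sentence: str) -> float:
    """Crude complexity proxy: token count plus type-token ratio."""
    tokens = sentence.split()
    if not tokens:
        return 0.0
    type_token_ratio = len(set(tokens)) / len(tokens)
    return len(tokens) + type_token_ratio

def select_demonstrations(pool: list[str], test_sentence: str, k: int = 4) -> list[str]:
    """Pick the k pool sentences whose complexity is closest to the test sentence's."""
    target = complexity(test_sentence)
    return sorted(pool, key=lambda s: abs(complexity(s) - target))[:k]

if __name__ == "__main__":
    # Hypothetical candidate demonstrations for a sequence tagging prompt.
    pool = [
        "Barack Obama visited Paris .",
        "The quick brown fox jumps over the lazy dog near the old riverbank .",
        "Stocks fell .",
        "Researchers at MIT released a new dataset for named entity recognition .",
    ]
    test = "Angela Merkel spoke in Berlin on Tuesday ."
    print(select_demonstrations(pool, test, k=2))
```

The selected demonstrations would then be concatenated into the few-shot prompt ahead of the test sentence; any stronger complexity signal (e.g., parse-tree depth) could replace the proxy above without changing the selection logic.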