Core Concepts
A training-free self-improving framework for zero-shot NER with LLMs significantly improves performance.
Abstract
The article introduces a self-improving framework for zero-shot Named Entity Recognition (NER) with Large Language Models (LLMs). The framework leverages an unlabeled corpus to stimulate the self-learning ability of LLMs: the model annotates the corpus itself, self-consistency is used to select reliable annotations, and the resulting self-annotated dataset serves as demonstrations for inference, yielding improved performance on four benchmarks. Experimental analysis shows that increasing the size of the unlabeled corpus or the number of self-improvement iterations does not guarantee further gains, underscoring the importance of more advanced strategies for reliable annotation selection.
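The self-consistency step described above can be illustrated with a minimal sketch: sample several independent LLM annotations of the same sentence and keep only the entity predictions that a sufficient fraction of samples agree on. The function name, the majority threshold, and the example samples below are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def select_reliable_annotations(sampled_annotations, threshold=0.5):
    """Keep (span, type) predictions appearing in at least `threshold`
    fraction of the sampled LLM outputs (self-consistency filtering).
    `sampled_annotations` is a list of annotation sets, one per LLM sample.
    """
    n = len(sampled_annotations)
    # Count in how many samples each (span, type) pair occurs.
    counts = Counter(ann for sample in sampled_annotations for ann in set(sample))
    return {ann for ann, c in counts.items() if c / n >= threshold}

# Hypothetical samples: three annotation passes over the same sentence.
samples = [
    [("Steve Jobs", "PER"), ("Apple", "ORG")],
    [("Steve Jobs", "PER"), ("Apple", "LOC")],
    [("Steve Jobs", "PER"), ("Apple", "ORG")],
]
# ("Apple", "LOC") appears in only 1 of 3 samples and is filtered out.
print(sorted(select_reliable_annotations(samples)))
# → [('Apple', 'ORG'), ('Steve Jobs', 'PER')]
```

The surviving annotations would then form the self-annotated dataset used as in-context demonstrations.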
Stats
Experiments on four benchmarks show substantial performance improvements from the framework.
Increasing the size of the unlabeled corpus or the number of iterations does not guarantee further improvement.
Performance might be further boosted via more advanced strategies for reliable annotation selection.