The article introduces a self-improving framework for zero-shot Named Entity Recognition (NER) with Large Language Models (LLMs). The framework leverages an unlabeled corpus to stimulate the self-learning ability of LLMs: using self-consistency, it selects reliable annotations to form a self-annotated dataset, which leads to improved performance on four benchmarks. Experimental analysis reveals that increasing the size of the unlabeled corpus or the number of self-improvement iterations does not guarantee further gains, underscoring the importance of advanced strategies for reliable annotation selection.
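The self-consistency selection step described above can be sketched as a majority vote over multiple sampled LLM annotations of the same sentence: an entity prediction is kept for the self-annotated dataset only if enough samples agree on it. The function below is a minimal illustration of that idea, not the paper's exact implementation; the sample data and the 0.8 threshold are assumptions for demonstration.

```python
from collections import Counter

def select_reliable_annotations(samples, threshold=0.8):
    """Keep entity annotations that a sufficient fraction of LLM samples agree on.

    samples: list of annotation sets, one per sampled LLM response;
    each annotation is a (span, label) tuple.
    threshold: minimum fraction of samples that must contain the annotation.
    """
    # Count how many samples contain each (span, label) annotation.
    counts = Counter(ann for s in samples for ann in set(s))
    n = len(samples)
    return {ann for ann, c in counts.items() if c / n >= threshold}

# Toy example: three sampled LLM responses for one sentence (hypothetical data).
samples = [
    {("Steve Jobs", "PER"), ("Apple", "ORG")},
    {("Steve Jobs", "PER")},
    {("Steve Jobs", "PER"), ("Apple", "ORG")},
]
reliable = select_reliable_annotations(samples, threshold=0.8)
# ("Steve Jobs", "PER") appears in 3/3 samples and is kept;
# ("Apple", "ORG") appears in only 2/3 samples and is filtered out.
```

In the framework, only sentences whose annotations pass such a reliability filter would be added to the self-annotated training set for the next iteration.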
Key insights distilled from the paper by Tingyu Xie, Q... at arxiv.org, 03-19-2024: https://arxiv.org/pdf/2311.08921.pdf