The paper proposes a model collaboration framework for relational triple extraction that addresses the limitations of large language models (LLMs) in extracting multiple triples from complex sentences.
The key components of the framework are:
An evaluation model: a transformer-based model that scores candidate entity pairs and identifies those likely to participate in a relation. It is trained with a self-labeling approach that generates negative samples from the unlabeled entity pairs in sentences containing multiple triples.
Integration with LLMs: The positive entity pairs identified by the evaluation model are provided as prompts to the LLMs, along with the original instructions. This guides the LLMs to consider more entity pairs and assign appropriate relations, improving the recall of the extraction results.
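The two components above can be sketched as a small pipeline: a scorer filters candidate entity pairs, and the surviving pairs are appended to the LLM prompt. This is a minimal illustration, not the paper's implementation; `toy_scorer` and the prompt wording are hypothetical stand-ins for the trained evaluation model and the authors' actual instructions.

```python
from itertools import permutations

def filter_entity_pairs(sentence, entities, scorer, threshold=0.5):
    """Keep candidate (head, tail) pairs the evaluation model scores as positive.

    `scorer` stands in for the paper's transformer-based evaluation model:
    it maps (sentence, head, tail) to a probability that the pair holds a relation.
    """
    candidates = list(permutations(entities, 2))
    return [(h, t) for h, t in candidates if scorer(sentence, h, t) >= threshold]

def build_prompt(instruction, sentence, positive_pairs):
    """Augment the original extraction instruction with the filtered pairs."""
    hint = "; ".join(f"({h}, {t})" for h, t in positive_pairs)
    return f"{instruction}\nSentence: {sentence}\nCandidate entity pairs: {hint}"

# Toy stand-in scorer: in practice this is the trained evaluation model.
def toy_scorer(sentence, head, tail):
    return 0.9 if (head, tail) == ("Steve Jobs", "Apple") else 0.1

sentence = "Steve Jobs co-founded Apple in Cupertino."
pairs = filter_entity_pairs(sentence, ["Steve Jobs", "Apple", "Cupertino"], toy_scorer)
prompt = build_prompt("Extract all relational triples.", sentence, pairs)
```

The key design point is that the small model only proposes *pairs*; assigning the relation label for each pair is left to the LLM, which sees the filtered pairs as hints in its prompt.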
The authors conduct extensive experiments on several complex relational triple extraction datasets, including NYT, SKE21, and HacRED. The results show that the proposed framework significantly improves the recall of LLMs, especially on sentences containing multiple triples, while maintaining high precision. The evaluation model can also be integrated with traditional extraction models to improve their precision.
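The recall and precision figures discussed here are conventionally computed over exact matches of (head, relation, tail) triples. A minimal sketch of that metric, assuming exact-match scoring (the paper's evaluation script may differ in details):

```python
def triple_prf(predicted, gold):
    """Micro precision/recall/F1 over exact (head, relation, tail) matches."""
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)  # triples both predicted and in the gold set
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one of two predictions is correct, one of two gold triples is found.
p, r, f = triple_prf(
    [("Jobs", "founder_of", "Apple"), ("Jobs", "born_in", "Cupertino")],
    [("Jobs", "founder_of", "Apple"), ("Apple", "located_in", "Cupertino")],
)
```

Under this metric, prompting the LLM with more positive entity pairs raises recall by increasing the number of gold triples found, while the filtering step keeps precision high by suppressing spurious pairs.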
The paper also includes ablation studies to analyze the contributions of different components of the framework. The results demonstrate the importance of the evaluation-filtering step and the collaboration between the small and large models.
Source: Zepeng Ding et al., arXiv, April 16, 2024. https://arxiv.org/pdf/2404.09593.pdf