Core Concepts
Retrieval-augmented large language models (LLMs) enhance event temporal relation extraction by optimizing prompt templates and verbalizers.
Abstract
The paper addresses the challenges of event temporal relation (TempRel) extraction, which stem from the inherent ambiguity of temporal relations. It introduces a novel approach that uses retrieval-augmented LLMs to improve prompt templates and verbalizers: the method selects appropriate modifiers for event trigger words and establishes mappings from the vocabulary space to the label space. Experimental evaluations show significant performance improvements across three datasets.
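The vocabulary-to-label mapping mentioned above can be sketched as a minimal verbalizer: each TempRel label is associated with a set of label words, and per-token scores from a masked language model are aggregated into per-label scores. The label names and label words below are illustrative assumptions, not the paper's actual verbalizer.

```python
# Illustrative verbalizer: maps label words (vocabulary space) to
# TempRel labels (label space). Words chosen here are hypothetical.
TEMPREL_VERBALIZER = {
    "before": ["before", "earlier", "previously"],
    "after": ["after", "later", "subsequently"],
    "simultaneous": ["while", "during", "meanwhile"],
    "vague": ["maybe", "possibly"],
}

def verbalize(token_scores):
    """Aggregate per-token scores into per-label scores.

    token_scores: dict mapping a vocabulary token to its predicted
    probability at the mask position. Missing tokens score 0.0.
    Returns (best_label, label_scores), taking the max over each
    label's word set.
    """
    label_scores = {
        label: max(token_scores.get(word, 0.0) for word in words)
        for label, words in TEMPREL_VERBALIZER.items()
    }
    best_label = max(label_scores, key=label_scores.get)
    return best_label, label_scores
```

For example, if the masked-LM assigns 0.7 to "before" and 0.2 to "later", the verbalizer resolves the prediction to the `before` label.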
Structure:
Introduction to Event Temporal Relation Extraction Challenges
Ambiguity of TempRel complicates extraction.
Novel Approach with Retrieval-Augmented LLMs
Leveraging diverse capabilities of LLMs for template design.
Proposed Method: RETR Model Overview
Rough selection stage and fine-tuning selection stage explained.
Experiments and Results Analysis
Performance metrics on TB-Dense, TDD-Man, and TDD-Auto datasets.
Comparative Analysis against Baselines without Retrieval
Effectiveness of manually designed vs. auxiliary LLM-designed templates.
Correlation Analysis: PLM Selection, Strategy Selection, Tuning Mode
Impact of different pre-trained language models, loss functions, and tuning modes.
Case Study Illustration: Utilization of PLM knowledge for TempRel prediction.
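The case-study point above, using a PLM's cloze knowledge for TempRel prediction, is commonly realized with a prompt template that places a mask token between the two event triggers. The template wording below is a hypothetical sketch, not the paper's actual template.

```python
def build_prompt(sentence, trigger1, trigger2, mask="[MASK]"):
    """Build a cloze-style prompt for TempRel prediction.

    The masked position is where a PLM predicts a connective word
    (e.g. "before", "after"), which a verbalizer then maps to a
    TempRel label. Template wording is illustrative.
    """
    return (
        f"{sentence} The event '{trigger1}' happened "
        f"{mask} the event '{trigger2}'."
    )
```

A masked-LM (e.g. via a fill-mask pipeline) scores candidate tokens at the `[MASK]` position, and the verbalizer converts those token scores into a TempRel label.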
Stats
"Our method capitalizes on the diverse capabilities of various LLMs to generate a wide array of ideas for template and verbalizer design."
"Experimental results show that our method consistently achieves good performances on three widely recognized datasets."
Quotes
"Our contributions can be summarized as follows:"
"We are the first to integrate Retrieval-Augmented Generation (RAG) with the prompt-based learning paradigm."