This paper addresses the challenges that large language models (LLMs) face in Time-Sensitive Question Answering (TSQA), a task that requires effective use of temporal context and reasoning about time-evolving facts to produce accurate answers.
The key highlights are:
LLMs exhibit limited sensitivity to temporal information in questions and contexts, as well as inadequate temporal reasoning capabilities, both of which hinder their performance on TSQA tasks.
The authors propose a novel framework that addresses these challenges through two main methodologies:
a. Temporal Information-Aware Embedding: This enhances the model's attention to temporal data and adjacent temporal details within questions and contexts.
b. Granular Contrastive Reinforcement Learning: This improves the model's temporal reasoning by incorporating remote and proximal negative answers based on varying temporal distances, and by employing a more principled reward function.
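The granular reward idea in (b) can be illustrated with a toy sketch. The summary does not give the paper's actual reward formula, so the linear decay, the year-based distance, and all names below are illustrative assumptions: the point is only that a proximal negative (temporally close to the gold answer) receives a gentler penalty than a remote one.

```python
def temporal_reward(pred_year: int, gold_year: int, max_dist: int = 10) -> float:
    """Toy graded reward for a time-sensitive answer.

    An exact temporal match earns full reward; a proximal wrong answer
    earns partial credit that decays linearly with temporal distance;
    a remote wrong answer earns nothing. The linear decay and the
    year-level granularity are illustrative choices, not the paper's.
    """
    dist = abs(pred_year - gold_year)
    if dist == 0:
        return 1.0
    # Partial credit shrinks as the predicted period drifts from the gold one.
    return max(0.0, 1.0 - dist / max_dist)

# A near-miss (off by 1 year) is penalized far less than a remote error.
print(temporal_reward(2019, 2020))  # 0.9
print(temporal_reward(2005, 2020))  # 0.0
```

In a contrastive RL setup, such a distance-aware reward lets the policy distinguish "almost right" temporal answers from answers about an entirely different period, rather than treating every mismatch as equally wrong.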
Experimental results on four TSQA datasets demonstrate that the proposed framework significantly outperforms existing LLMs, marking a step forward in bridging the performance gap between machine and human temporal understanding and reasoning.
Source: arxiv.org