The paper proposes the Repeating-Local-Global History Network (RLGNet) to address the challenge of temporal knowledge graph (TKG) reasoning, which aims to predict future facts based on historical information.
The key highlights and insights are:
The model consists of three encoders: a repeating history encoder, a local history encoder, and a global history encoder, each capturing a different aspect of historical facts.
The model utilizes time vectors to encode time and frequency information, which are then integrated into the three encoders.
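As a rough illustration of the idea, time and frequency information can be turned into fixed-length vectors before being fed to an encoder. The sinusoidal construction and log-scaled frequency feature below are common choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def encode_time(t: int, dim: int = 8) -> np.ndarray:
    # Sinusoidal time encoding (illustrative assumption; the paper's
    # exact construction may differ).
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000 ** (2 * i / dim))
    return np.concatenate([np.sin(t * freqs), np.cos(t * freqs)])

def encode_frequency(count: int, dim: int = 8) -> np.ndarray:
    # Log-scaled repetition-frequency feature, broadcast to a vector.
    return np.full(dim, np.log1p(count))

# A query's temporal context: concatenate time and frequency vectors
# before integrating them into an encoder.
ctx = np.concatenate([encode_time(42), encode_frequency(3)])
```

Concatenation is just one way to integrate the two signals; the encoders could equally add or gate them.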
The scoring decoder, which is used in all three encoders, processes the input vectors using a convolutional neural network and outputs the predicted entity scores.
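A ConvE-style convolutional scorer gives a feel for how such a decoder works: stack the query embeddings, convolve, and score every candidate entity by a dot product. This is a simplified stand-in; the decoder's exact architecture is an assumption here.

```python
import numpy as np

def conv_score(sub, rel, ent_emb, kernel):
    # Stack subject and relation embeddings into a 2-row "image",
    # apply a small valid 1-D convolution, then score every entity
    # by a dot product (ConvE-style sketch).
    x = np.stack([sub, rel])              # shape (2, d)
    k = kernel.shape[-1]
    d = sub.shape[0]
    feat = np.array([
        np.sum(x[:, j:j + k] * kernel) for j in range(d - k + 1)
    ])                                    # conv features
    feat = np.maximum(feat, 0.0)          # ReLU nonlinearity
    # Map conv features back to embedding dim (pad/repeat for
    # simplicity here) and score against all entities.
    proj = np.resize(feat, d)
    return ent_emb @ proj                 # one score per entity

rng = np.random.default_rng(0)
d, n_ent, k = 8, 5, 3
scores = conv_score(rng.normal(size=d), rng.normal(size=d),
                    rng.normal(size=(n_ent, d)), rng.normal(size=(2, k)))
```

In practice the projection would be a learned linear layer rather than a resize, and the scores would be passed through a softmax or sigmoid for training.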
Extensive experiments on six benchmark datasets show that RLGNet generally outperforms existing TKG reasoning models in both multi-step and single-step reasoning tasks.
Ablation studies demonstrate the importance of the three encoders, with the repeating history encoder being particularly beneficial across various tasks.
Hyperparameter analysis reveals that global and local historical information contribute differently to multi-step and single-step reasoning, and identifies optimal settings for the number of GCN layers and the number of top candidate entities.
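The trade-off between local and global history can be pictured as a weighted fusion of the three encoders' entity scores. The fusion rule and the weight `alpha` below are illustrative assumptions, not RLGNet's exact formula.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def combine_scores(s_rep, s_loc, s_glo, alpha=0.7):
    # Hypothetical fusion: repeating-history scores plus a convex
    # combination of local and global scores. A larger alpha favors
    # local history (helpful for single-step reasoning); a smaller
    # alpha favors global history (helpful for multi-step reasoning).
    return (softmax(s_rep)
            + alpha * softmax(s_loc)
            + (1 - alpha) * softmax(s_glo))

fused = combine_scores(np.array([1.0, 0.0]),
                       np.array([0.0, 1.0]),
                       np.array([0.5, 0.5]))
```

Tuning `alpha` per task would mirror the different optimal weightings the hyperparameter study reports for the two reasoning settings.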
Key insights extracted from the paper by Ao Lv, Yongzh... at arxiv.org, 04-02-2024: https://arxiv.org/pdf/2404.00586.pdf