Core Concepts
The author employs reinforcement learning, specifically the REINFORCE algorithm, to address the challenges that incomplete data poses for multi-hop Knowledge Graph Reasoning (KG-R). By refining reward shaping with pre-trained embeddings and Prompt Learning, the study aims to improve the accuracy and robustness of knowledge inference.
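The REINFORCE idea above can be sketched on a toy path-finding task. The sketch below is purely illustrative and is not the paper's implementation: the environment, state/action sizes, and the "answer is state 2" terminal check are all hypothetical stand-ins for walking a knowledge graph hop by hop, and the update is the standard score-function gradient, grad log pi(a|s) scaled by the terminal reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: at each hop the agent picks one of N_ACTIONS
# outgoing edges; after HOPS choices the walk either reaches the answer
# entity (reward 1) or not (reward 0). Sizes are illustrative only.
N_STATES, N_ACTIONS, HOPS = 5, 4, 3
theta = np.zeros((N_STATES, N_ACTIONS))  # logits of the policy pi(a|s)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def rollout(theta):
    """Sample one path; toy dynamics: next state = action index mod N_STATES."""
    s, traj = 0, []
    for _ in range(HOPS):
        probs = softmax(theta[s])
        a = rng.choice(N_ACTIONS, p=probs)
        traj.append((s, a))
        s = a % N_STATES
    reward = 1.0 if s == 2 else 0.0  # toy terminal check: "answer" is state 2
    return traj, reward

def reinforce_step(theta, lr=0.5):
    """One REINFORCE update: add lr * reward * grad log pi(a|s) per step."""
    traj, reward = rollout(theta)
    for s, a in traj:
        probs = softmax(theta[s])
        grad_log = -probs          # d/dtheta log softmax, off-chosen entries
        grad_log[a] += 1.0         # chosen-action entry
        theta[s] += lr * reward * grad_log
    return reward

for _ in range(2000):
    reinforce_step(theta)

# Fresh evaluation rollouts after training (sampling only, no updates)
final_success = np.mean([rollout(theta)[1] for _ in range(500)])
```

Because the reward is a single sparse terminal signal, every hop on a successful path gets reinforced equally; this sparsity is exactly what makes reward shaping attractive for incomplete graphs.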
Abstract
The content discusses enhancing multi-hop Knowledge Graph Reasoning through Reward Shaping Techniques. It explores challenges posed by incomplete Knowledge Graphs and proposes methodologies like pre-trained BERT embeddings and Prompt Learning for refining reward shaping. The study aims to set new standards for future research in computational knowledge representation by improving precision in multi-hop reasoning.
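The reward-shaping idea can be sketched as follows. In an incomplete KG, a correct answer that is simply missing from the graph (a false negative) would otherwise score 0, the same as a truly wrong answer; a soft score from a pre-trained embedding model fills that gap. Everything below is a hedged illustration: the `embed` function and the toy vectors stand in for pre-trained BERT or KG embeddings, and cosine similarity stands in for whatever scorer the paper actually uses.

```python
import numpy as np

def shaped_reward(reached, answer, known_answers, embed, hit_reward=1.0):
    """Hard reward if the reached entity is a known answer; otherwise fall
    back to a soft embedding-similarity score, so plausible entities missing
    from an incomplete KG (false negatives) are not scored like true
    negatives. `embed` maps an entity to a vector (hypothetical interface)."""
    if reached in known_answers:
        return hit_reward
    u, v = embed(reached), embed(answer)
    cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(0.0, cos)  # clip so the shaped reward stays in [0, 1]

# Toy vectors standing in for pre-trained embeddings (illustrative only)
vecs = {
    "aspirin":   np.array([1.0, 0.1]),
    "ibuprofen": np.array([0.9, 0.2]),   # semantically close to aspirin
    "granite":   np.array([-1.0, 0.0]),  # unrelated entity
}
embed = vecs.__getitem__

r_known   = shaped_reward("aspirin", "aspirin", {"aspirin"}, embed)
r_missing = shaped_reward("ibuprofen", "aspirin", {"aspirin"}, embed)
r_wrong   = shaped_reward("granite", "aspirin", {"aspirin"}, embed)
```

Here `r_missing` lands between `r_wrong` and `r_known`: the plausible-but-unlisted entity receives partial credit instead of being punished like a true negative.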
Statistics
False negative search results receive the same reward as true negatives.
Sparse KG Policy Gradient - Hits@1: 0.578, Hits@3: 0.854, Hits@5: 0.911, Hits@10: 0.871, MRR: 0.913
Rich KG Policy Gradient - Hits@1: 0.625, Hits@3: 0.974, Hits@5: 0.845, Hits@10: 0.969, MRR: 0.734
Sparse KG Policy Gradient + Rich Reward Shaping - Hits@1: 0.850, Hits@3: 0.910, Hits@5: 0.992, Hits@10: 0.995, MRR: 0.930
BERT Contextualization RS trained on Rich KG - Hits@1: 0.810, Hits@3: 0.877, Hits@5: 0.865, Hits@10: 0.785, MRR: 0.944
Prompt Learning based RS trained on Rich KG - Hits@1: 0.860, Hits@3: 0.937, Hits@5: 0.997, Hits@10: 0.992, MRR: 0.916
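For readers unfamiliar with the metrics above: Hits@k is the fraction of test queries whose correct answer appears in the top k ranked candidates, and MRR (mean reciprocal rank) averages 1/rank of the correct answer. A minimal sketch, with made-up ranks rather than the paper's data:

```python
def hits_at_k(ranks, k):
    """Fraction of queries whose correct answer is ranked in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    """Mean reciprocal rank of the correct answer (ranks are 1-based)."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Illustrative ranks of the true answer for five queries (not the paper's data)
ranks = [1, 3, 2, 10, 1]

h1 = hits_at_k(ranks, 1)   # 2 of 5 queries ranked the answer first -> 0.4
h3 = hits_at_k(ranks, 3)   # 4 of 5 within the top 3 -> 0.8
score = mrr(ranks)         # (1 + 1/3 + 1/2 + 1/10 + 1) / 5
```

Note that Hits@k is monotone non-decreasing in k by definition, so rows above where a smaller-k value exceeds a larger-k one (e.g. Hits@5 above Hits@10) are worth double-checking against the original paper.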
Quotes
"The quintessence of this research elucidates the employment of reinforcement learning strategies to navigate the intricacies inherent in multi-hop KG-R."
"By partitioning the Unified Medical Language System benchmark dataset into rich and sparse subsets..."
"Our work contributes a novel perspective to the discourse on KG reasoning..."