The Observation-Driven Agent (ODA) framework effectively integrates the capabilities of large language models (LLMs) and knowledge graphs (KGs) to enhance reasoning and question-answering performance on KG-centric tasks.
ULTRA is a method for learning universal and transferable graph representations that enables zero-shot inference on any knowledge graph with arbitrary entity and relation vocabularies.
Graph Neural Networks (GNNs) with tail entity scoring have achieved state-of-the-art performance on knowledge graph reasoning, but the theoretical understanding of the types of logical rules they can learn is lacking. This paper proposes a unified framework called QL-GNN to analyze the expressivity of these GNNs, formally demonstrating their ability to learn a specific class of rule structures. It further introduces EL-GNN, a novel GNN design that can learn rule structures beyond the capacity of QL-GNN.
ULTRAQUERY is a single model that can answer complex logical queries zero-shot on any knowledge graph, even one with new entities and relations, by using inductive relation projections and non-parametric logical operators.
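Non-parametric logical operators of this kind are typically fuzzy-logic operations applied element-wise to per-entity score vectors, so no new parameters need to be learned for conjunction, disjunction, or negation. A minimal sketch, assuming product t-norm/t-conorm semantics over NumPy score vectors (the function names here are illustrative, not ULTRAQUERY's actual API):

```python
import numpy as np

def fuzzy_and(p, q):
    # Conjunction via the product t-norm: an entity must score
    # highly under both branches of the query to score highly overall.
    return p * q

def fuzzy_or(p, q):
    # Disjunction via the product t-conorm (probabilistic sum).
    return p + q - p * q

def fuzzy_not(p):
    # Negation as complement of the score in [0, 1].
    return 1.0 - p

# Per-entity scores from two relation-projection branches of a query.
scores_a = np.array([0.9, 0.2, 0.7])
scores_b = np.array([0.8, 0.5, 0.1])

conj = fuzzy_and(scores_a, scores_b)   # intersection of answer sets
disj = fuzzy_or(scores_a, scores_b)    # union of answer sets
```

Because these operators contain no trainable weights, they transfer unchanged to any graph the relation-projection model can score, which is what makes the zero-shot setting possible.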
This paper proposes a novel model called Logic-Query-of-Thoughts (LGOT) that combines the strengths of large language models (LLMs) and knowledge graph reasoning to effectively answer complex logic queries.
Language-guided Abstract Reasoning over Knowledge graphs (LARK) is a novel decoupled approach that formulates complex KG reasoning as a combination of contextual KG search and logical query reasoning, leveraging the strengths of graph extraction algorithms and large language models (LLMs), respectively.
Query2GMM presents a novel query embedding approach that leverages Gaussian Mixture Models to accurately represent multiple disjoint answer subsets for complex logical queries, enabling effective reasoning over knowledge graphs.
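Representing an answer set as a Gaussian mixture lets a single query embedding cover several disjoint clusters of answers: each candidate entity is scored by the mixture density at its embedding. A minimal sketch of that scoring idea, assuming isotropic components and illustrative names (not Query2GMM's actual implementation):

```python
import numpy as np

def gmm_density(x, weights, means, stds):
    """Density of an isotropic Gaussian mixture at embedding x.

    weights: mixture weights summing to 1, one per answer subset
    means:   component mean vectors (cluster centers of answers)
    stds:    per-component scalar standard deviations
    """
    d = x.shape[0]
    density = 0.0
    for w, mu, s in zip(weights, means, stds):
        norm = (2.0 * np.pi * s ** 2) ** (-d / 2.0)
        density += w * norm * np.exp(-np.sum((x - mu) ** 2) / (2.0 * s ** 2))
    return density

# Two disjoint answer clusters in a 2-d embedding space.
weights = [0.5, 0.5]
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
stds = [1.0, 1.0]

# A candidate entity near the first cluster scores far higher
# than one halfway between the clusters.
near = gmm_density(np.array([0.1, 0.0]), weights, means, stds)
mid = gmm_density(np.array([2.5, 2.5]), weights, means, stds)
```

A single Gaussian (or box/beta embedding) would be forced to place mass between the clusters; the mixture assigns low density there, which is the motivation for modeling multiple disjoint answer subsets.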