The paper proposes a novel approach called Language-guided Abstract Reasoning over Knowledge graphs (LARK) to address the challenges of complex logical reasoning over knowledge graphs.
The key highlights are:
LARK utilizes the reasoning abilities of large language models (LLMs) by formulating complex KG reasoning as a combination of contextual KG search and logical query reasoning.
It first abstracts out the logical information from both the input query and the KG, so that the model focuses on the logical formulation, avoids hallucination, and generalizes across different knowledge graphs.
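As an illustration of what this abstraction might look like in practice, here is a minimal sketch that maps concrete entity and relation names to neutral placeholder IDs; the function name and ID scheme are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's code): abstract a KG by
# replacing concrete entity/relation names with neutral placeholder IDs,
# so the LLM reasons over logical structure rather than memorized facts.

def abstract_graph(triples):
    """Map names in (head, relation, tail) triples to abstract IDs."""
    ent_ids, rel_ids, abstract_triples = {}, {}, []
    for h, r, t in triples:
        for e in (h, t):
            ent_ids.setdefault(e, f"e{len(ent_ids)}")
        rel_ids.setdefault(r, f"r{len(rel_ids)}")
        abstract_triples.append((ent_ids[h], rel_ids[r], ent_ids[t]))
    return abstract_triples, ent_ids, rel_ids

triples = [("Paris", "capital_of", "France"),
           ("France", "member_of", "EU")]
print(abstract_graph(triples)[0])
# [('e0', 'r0', 'e1'), ('e1', 'r1', 'e2')]
```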
LARK then extracts relevant subgraphs from the abstract KG using the entities and relations present in the logical query, and uses these subgraphs as context prompts for input to LLMs.
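The retrieval step could look roughly like the following sketch, which gathers a k-hop neighborhood around the query's entities, restricted to the query's relations, and serializes it as prompt context; the BFS formulation and all names here are assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: retrieve a k-hop subgraph around the query's
# entities (restricted to the query's relations) for use as LLM context.
from collections import defaultdict

def extract_subgraph(triples, query_entities, query_relations, hops=2):
    """Collect triples reachable within `hops` edges of the query entities."""
    adj = defaultdict(list)
    for h, r, t in triples:
        adj[h].append((h, r, t))
        adj[t].append((h, r, t))
    frontier, seen, subgraph = set(query_entities), set(query_entities), set()
    for _ in range(hops):
        nxt = set()
        for e in frontier:
            for h, r, t in adj[e]:
                if r in query_relations:
                    subgraph.add((h, r, t))
                    for n in (h, t):
                        if n not in seen:
                            seen.add(n)
                            nxt.add(n)
        frontier = nxt
    return subgraph

def subgraph_to_prompt(subgraph):
    """Serialize retrieved triples into a context block for the LLM prompt."""
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in sorted(subgraph))
```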
To handle complex reasoning queries, LARK exploits the logical nature of the queries and deterministically decomposes each multi-operation query into logically-ordered elementary queries, each containing a single operation. The decomposed queries are then converted to prompts and processed through the LLM to generate the final set of answers.
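For example, a 2-hop projection query can be split into two single-projection steps, the second consuming the first step's answer set. The sketch below shows such a logically-ordered decomposition; the nested-tuple query encoding and the prompt wording are illustrative assumptions, not the paper's format.

```python
# Hypothetical sketch: decompose a nested multi-operation query into
# logically-ordered single-operation steps, each rendered as one prompt.
# The query encoding and prompt wording are illustrative assumptions.

# 2-hop query: "entities reached from e0 via r0, then via r1".
query = ("projection", "r1", ("projection", "r0", "e0"))

def decompose(query, steps=None):
    """Post-order traversal so inner operations are emitted before outer ones."""
    if steps is None:
        steps = []
    if isinstance(query, str):           # base case: a concrete entity ID
        return query, steps
    op, rel, sub = query
    operand, steps = decompose(sub, steps)
    result_var = f"V{len(steps)}"        # placeholder for this step's answers
    steps.append((op, rel, operand, result_var))
    return result_var, steps

_, steps = decompose(query)
for op, rel, operand, result_var in steps:
    # Each elementary step becomes one LLM prompt; the answers bound to
    # result_var are substituted into the next step's prompt.
    print(f"[{op}] Which entities relate to {operand} via {rel}? "
          f"Call this set {result_var}.")
```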
Experiments on standard KG datasets show that LARK outperforms previous state-of-the-art approaches by 35%-84% across 14 first-order logic (FOL) query types, with significant performance gains on queries of higher complexity.
The paper also establishes the advantages of chain decomposition and shows that increasing the scale and improving the design of the underlying LLMs contribute significantly to LARK's performance.
Key insights extracted from the source content by Nurendra Cho... on arxiv.org, 04-02-2024: https://arxiv.org/pdf/2305.01157.pdf