The paper explores methods to incorporate structural information from knowledge graphs (KGs) into large language models (LLMs) to improve their performance on knowledge graph completion (KGC) tasks.
The authors first discuss extending existing LLM paradigms such as in-context learning and instruction tuning to incorporate KG structural information through additional textual prompts. They then propose a novel Knowledge Prefix Adapter (KoPA) approach that leverages pre-trained structural embeddings to capture the complex structure of entities and relations in the KG. KoPA projects these structural embeddings into the textual token space and uses them as virtual knowledge tokens prepended as a prefix to the input prompt, as sketched below.
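A minimal sketch of this prefix mechanism, assuming PyTorch and a simple linear projection (the names `KnowledgePrefixAdapter`, `struct_dim`, and `llm_dim` are illustrative, not taken from the paper; the actual adapter architecture may differ):

```python
import torch
import torch.nn as nn

class KnowledgePrefixAdapter(nn.Module):
    """Projects pre-trained KG structural embeddings into the LLM's
    token-embedding space so they can act as virtual prefix tokens."""

    def __init__(self, struct_dim: int, llm_dim: int):
        super().__init__()
        # A single linear projection for illustration; the paper's
        # adapter may be more elaborate.
        self.proj = nn.Linear(struct_dim, llm_dim)

    def forward(self, head_emb, rel_emb, tail_emb):
        # Stack the (head, relation, tail) structural embeddings of the
        # triple being scored: (batch, 3, struct_dim) -> (batch, 3, llm_dim)
        triple = torch.stack([head_emb, rel_emb, tail_emb], dim=1)
        return self.proj(triple)

def build_inputs(adapter, head_emb, rel_emb, tail_emb, prompt_embeds):
    """Prepend virtual knowledge tokens to the embedded text prompt.

    prompt_embeds: the LLM's embedding of the textual KGC prompt,
    shape (batch, seq_len, llm_dim). The returned sequence is fed to
    the LLM in place of the plain prompt embeddings.
    """
    prefix = adapter(head_emb, rel_emb, tail_emb)  # (batch, 3, llm_dim)
    return torch.cat([prefix, prompt_embeds], dim=1)
```

Placing the projected triple embeddings at the front of the sequence lets every prompt token attend to the structural information, which is the intuition behind using a prefix rather than interleaving the tokens elsewhere.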
The authors conduct comprehensive experiments on three public KGC benchmarks and demonstrate that the introduction of cross-modal structural information significantly boosts the factual knowledge reasoning ability of LLMs compared to existing approaches. They also analyze the transferability and knowledge retention of the proposed methods.
Source: Yichi Zhang et al., arxiv.org, 04-16-2024. https://arxiv.org/pdf/2310.06671.pdf