Core Concepts
G-Retriever enhances graph understanding through a flexible question-answering framework for textual graphs.
Abstract
G-Retriever introduces a novel approach to question answering on textual graphs, combining graph neural networks (GNNs), large language models (LLMs), and retrieval-augmented generation (RAG). It addresses the limitations of existing methods by targeting real-world applications such as scene graph understanding and knowledge graph reasoning. The architecture proceeds in four steps: indexing, retrieval, subgraph construction, and answer generation. Experiments show superior performance over baselines across multiple datasets, with efficiency gains from significant reductions in the average number of tokens and nodes after retrieval, and hallucinations cut by 54% compared to the baseline.
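The four-step pipeline above (indexing, retrieval, subgraph construction, answer generation) can be sketched as follows. This is a toy illustration, not the paper's implementation: the sample graph, the helper names (`retrieve_subgraph`, `to_prompt`), and the token-overlap scorer standing in for real embedding similarity are all assumptions made here for clarity.

```python
# Illustrative sketch of a retrieve-then-generate pipeline over a textual
# graph. Token overlap stands in for the embedding similarity a real
# indexing step would compute with a text encoder.

def tokens(text):
    """Lowercase, punctuation-stripped token set (toy tokenizer)."""
    return {t.strip(",.?!") for t in text.lower().split()}

# Toy textual graph: node id -> description, plus labeled edges.
NODES = {
    0: "Alice, a software engineer",
    1: "Bob, a graph researcher",
    2: "Carol, a chef",
}
EDGES = [(0, 1, "works with"), (1, 2, "is married to")]

def retrieve_subgraph(question, k=2):
    """Indexing + retrieval + subgraph construction: rank nodes by
    similarity to the question, keep the top-k, then keep only edges
    whose endpoints both survive (a crude stand-in for the paper's
    subgraph-construction step)."""
    q = tokens(question)
    ranked = sorted(NODES, key=lambda n: len(q & tokens(NODES[n])), reverse=True)
    kept = set(ranked[:k])
    sub_edges = [(u, v, r) for u, v, r in EDGES if u in kept and v in kept]
    return kept, sub_edges

def to_prompt(question, kept, sub_edges):
    """Answer generation input: verbalize the retrieved subgraph so an
    LLM can answer over it, instead of feeding the whole graph."""
    lines = [f"node {n}: {NODES[n]}" for n in sorted(kept)]
    lines += [f"edge: {NODES[u]} --{r}-- {NODES[v]}" for u, v, r in sub_edges]
    return "Graph:\n" + "\n".join(lines) + f"\nQuestion: {question}"

kept, sub_edges = retrieve_subgraph("Who does Bob work with?")
print(to_prompt("Who does Bob work with?", kept, sub_edges))
```

Because only the retrieved subgraph is verbalized, the prompt shrinks with the graph, which mirrors the token and node reductions the paper reports after retrieval.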
Stats
Empirical evaluations show the method outperforms baselines on textual graph tasks.
G-Retriever reduces hallucinations by 54% compared to the baseline.
Significant reductions in the average number of tokens and nodes after retrieval.
Quotes
"G-Retriever surpasses all inference-only baselines."
"G-Retriever outperforms traditional prompt tuning across all datasets."
"The combination of our method with LoRA achieves the best performance."