Core Concepts
Large Language Models can effectively comprehend graph information through soft prompts, as demonstrated by the GraphPrompter framework.
Abstract
The integration of Large Language Models (LLMs) with graph neural networks (GNNs) presents unique challenges due to the mismatch between the two modalities. To address this, the authors introduce GraphPrompter, which aligns graph information with LLMs via soft prompts. The framework combines a GNN that encodes complex graph structure with an LLM that processes the accompanying textual data. Experiments on benchmark datasets show the effectiveness of GraphPrompter on node classification and link prediction tasks, where it outperforms baselines such as zero-shot prompting and fine-tuning. The study highlights the potential of prompt-tuning strategies for letting LLMs interpret graph structures.
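The alignment idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a mean-aggregation step stands in for the GNN encoder, and a linear projection (learned in practice, random here) maps node embeddings into the LLM's token-embedding space, where they are prepended to the text-token embeddings as soft prompt tokens. All dimensions, names, and the toy graph are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(node_feats, adj):
    """One mean-aggregation message-passing step (a stand-in for the GNN encoder)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return (adj @ node_feats) / deg

def soft_prompt(node_feats, adj, proj, text_embeds):
    """Encode the graph, project into the LLM embedding space, prepend to text tokens."""
    h = gnn_layer(node_feats, adj)   # (num_nodes, d_graph) node embeddings
    graph_tokens = h @ proj          # (num_nodes, d_llm) soft prompt tokens
    return np.concatenate([graph_tokens, text_embeds], axis=0)

d_graph, d_llm = 8, 16
node_feats = rng.normal(size=(4, d_graph))                              # 4-node toy graph
adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
proj = rng.normal(size=(d_graph, d_llm))                                # trained jointly in practice
text_embeds = rng.normal(size=(5, d_llm))                               # 5 text-token embeddings

seq = soft_prompt(node_feats, adj, proj, text_embeds)
print(seq.shape)  # (9, 16): 4 graph soft-prompt tokens + 5 text tokens
```

In the actual framework the LLM then consumes this combined sequence, and only the lightweight projection (and optionally the GNN) is tuned while the LLM stays frozen.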
Stats
Code is available at https://github.com/franciscoliu/graphprompter.
Node Classification Accuracy: Cora - 80.26%, Citeseer - 73.61%, Pubmed - 94.80%, Ogbn-arxiv - 75.61%, Ogbn-products - 79.54%
Link Prediction Accuracy: Cora - 90.10%, Citeseer - 91.67%, Pubmed - 86.49%, Ogbn-arxiv - 73.21%, Ogbn-products - 69.55%
Quotes
"Graph plays an important role in representing complex relationships in real-world applications."
"GraphPrompter unveils the substantial capabilities of LLMs as predictors in graph-related tasks."
"Our main contributions are investigating whether LLMs can understand graph learning tasks via soft prompting."