The integration of Large Language Models (LLMs) with graph neural networks (GNNs) presents unique challenges due to the mismatch between the two modalities. To address this, the authors introduce GraphPrompter, a framework that aligns graph information with LLMs via soft prompts: a GNN encodes the graph structure, and the resulting embeddings are fed to the LLM alongside the textual input. Experiments on benchmark datasets show the effectiveness of GraphPrompter on node classification and link prediction tasks, where it outperforms baselines such as zero-shot prompting and standard fine-tuning. The study highlights the potential of prompt-tuning strategies for enabling LLMs to interpret graph structures.
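To make the soft-prompt idea concrete, here is a minimal sketch (not the authors' code) of how a GNN embedding can be projected into an LLM's token-embedding space and prepended as soft-prompt tokens. All names, layer sizes, and the simple mean-aggregation GNN are illustrative assumptions; the paper's actual encoder and LLM are not reproduced here.

```python
# Illustrative sketch only: a toy GNN encodes the target node's neighborhood,
# and a linear projector maps the graph embedding into the LLM's
# token-embedding space so it can be prepended as a soft prompt.
import torch
import torch.nn as nn


class GraphEncoder(nn.Module):
    """Tiny mean-aggregation GNN standing in for the paper's graph encoder."""

    def __init__(self, in_dim: int, hid_dim: int, num_layers: int = 2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim]; adj: dense [num_nodes, num_nodes] adjacency.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        for layer in self.layers:
            x = torch.relu(layer((adj @ x) / deg))  # mean over neighbors
        return x


class GraphPrompterSketch(nn.Module):
    """Projects GNN node embeddings into soft-prompt tokens for a frozen LLM."""

    def __init__(self, feat_dim: int, gnn_dim: int, llm_hidden: int,
                 num_prompt_tokens: int = 1):
        super().__init__()
        self.gnn = GraphEncoder(feat_dim, gnn_dim)
        # Map one graph embedding to one or more LLM-sized prompt vectors.
        self.projector = nn.Linear(gnn_dim, llm_hidden * num_prompt_tokens)
        self.num_prompt_tokens = num_prompt_tokens
        self.llm_hidden = llm_hidden

    def forward(self, x, adj, target_idx, text_embeds):
        # text_embeds: [batch, seq_len, llm_hidden], produced by the frozen
        # LLM's embedding layer applied to the tokenized textual prompt.
        node_emb = self.gnn(x, adj)[target_idx]              # [batch, gnn_dim]
        soft = self.projector(node_emb).view(
            -1, self.num_prompt_tokens, self.llm_hidden)     # [batch, P, H]
        # Prepend graph-derived soft prompts to the text embeddings; the
        # concatenated sequence would be fed to the frozen LLM (not shown).
        return torch.cat([soft, text_embeds], dim=1)


if __name__ == "__main__":
    x = torch.randn(5, 16)                  # 5 nodes, 16-dim features
    adj = (torch.rand(5, 5) > 0.5).float()  # toy adjacency matrix
    text = torch.randn(2, 8, 64)            # batch of 2, seq len 8, LLM dim 64
    model = GraphPrompterSketch(feat_dim=16, gnn_dim=32, llm_hidden=64)
    fused = model(x, adj, target_idx=torch.tensor([0, 3]), text_embeds=text)
    print(fused.shape)  # torch.Size([2, 9, 64]): 1 soft token + 8 text tokens
```

During prompt tuning, only the GNN and projector parameters would be updated while the LLM stays frozen, which is what keeps this style of alignment lightweight.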
Key insights distilled from the paper by Zheyuan Liu, ... at arxiv.org (03-19-2024): https://arxiv.org/pdf/2402.10359.pdf