Core Concepts
Large language models can enhance graph structure learning by denoising noisy connections and uncovering implicit node-wise dependencies.
Summary
GraphEdit introduces a novel approach that leverages large language models (LLMs) to refine graph structures. By instruction-tuning LLMs over graph structures, GraphEdit strengthens their reasoning about node relationships and addresses the noise and sparsity that limit the reliability of explicit graph connections. The model effectively denoises noisy connections and identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. Extensive experiments on benchmark datasets demonstrate the effectiveness and robustness of GraphEdit across various settings. The model implementation is available at https://github.com/HKUDS/GraphEdit.
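To make the edge-denoising idea concrete, here is a minimal sketch that asks an LLM to judge each candidate edge between two text-attributed nodes. The prompt wording, the `query_llm` callable, and the offline stub are illustrative assumptions, not the official GraphEdit pipeline (which combines instruction tuning with a learned edge predictor; see the linked repository).

```python
# Minimal sketch (assumption, not the official GraphEdit code): use an LLM
# as a yes/no judge over candidate edges between text-attributed nodes.
from typing import Callable, Dict, List, Tuple


def refine_edges(
    node_texts: Dict[int, str],
    candidate_edges: List[Tuple[int, int]],
    query_llm: Callable[[str], str],  # hypothetical chat-completion wrapper
) -> List[Tuple[int, int]]:
    """Keep only the candidate edges the LLM judges as semantically related."""
    kept = []
    for u, v in candidate_edges:
        prompt = (
            "Given the descriptions of two papers, answer 'yes' if they share "
            "a research topic and should be connected, otherwise 'no'.\n"
            f"Paper A: {node_texts[u]}\n"
            f"Paper B: {node_texts[v]}\n"
            "Answer:"
        )
        if query_llm(prompt).strip().lower().startswith("yes"):
            kept.append((u, v))
    return kept


if __name__ == "__main__":
    # Toy usage with a stub "LLM" so the example runs offline:
    # it answers 'yes' only when both papers mention diabetes.
    texts = {0: "diabetes type 1 study", 1: "diabetes type 2 trial", 2: "graph neural networks"}
    edges = [(0, 1), (0, 2)]

    def stub(prompt: str) -> str:
        return "yes" if prompt.count("diabetes") >= 2 else "no"

    print(refine_edges(texts, edges, stub))  # -> [(0, 1)]
```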
Statistics
Graph Neural Networks (GNNs) excel at learning node-level representations by aggregating information from neighboring nodes (a minimal aggregation sketch follows this list).
Real-world graph domains face challenges like data noise and sparsity, impacting the reliability of explicit graph structures.
PubMed dataset consists of academic papers categorized into three distinct categories: Diabetes Mellitus Type 1, Diabetes Mellitus Type 2, and Diabetes Mellitus, Experimental.
Cora dataset comprises papers classified into seven computer science domains.
Citeseer dataset comprises computer science publications classified into six distinct categories.
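The sketch below illustrates the neighborhood aggregation mentioned above: one propagation step with a dense adjacency matrix and a mean aggregator. The dense NumPy formulation and the toy graph are simplifying assumptions for illustration, not the GNN backbone used by GraphEdit.

```python
# Minimal sketch of GNN neighborhood aggregation (mean aggregator),
# assuming a small dense adjacency matrix for clarity.
import numpy as np


def gnn_layer(adj: np.ndarray, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One propagation step: average features over the neighborhood
    (including a self-loop), apply a linear transform, then ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)      # neighborhood sizes
    aggregated = (adj_hat @ features) / deg       # mean over neighbors + self
    return np.maximum(aggregated @ weight, 0.0)   # linear map + ReLU


# Toy graph: a 3-node chain 0-1-2, 4-dim features projected to 2 dims.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.rand(3, 4)
w = np.random.rand(4, 2)
print(gnn_layer(adj, x, w).shape)  # (3, 2)
```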
Quotes
"Graph Neural Networks have captured significant attention due to their remarkable capacity to model relationships within graph-structured data."
"GraphEdit leverages large language models to learn complex node relationships in graph-structured data."
"Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective."