Core Concepts
Integrating Large Language Models (LLMs) with Graph Structure Learning Models (GSLMs) significantly improves the robustness and accuracy of graph representation learning, especially when the input graph is noisy or incomplete.
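To make the integration concrete, here is a minimal sketch of the general LLM-plus-GSL pattern: text embeddings (here random vectors standing in for LM outputs) are used to learn a new adjacency matrix via cosine-similarity kNN, which is then fed to a standard GCN propagation step. The helper names (`learn_adjacency`, `gcn_propagate`) and the kNN-similarity structure learner are illustrative assumptions, not LangGSL's actual method.

```python
import numpy as np

def learn_adjacency(x, k=2):
    """Learn a sparse graph from node features via cosine-similarity kNN.
    (Illustrative structure learner, not the paper's specific module.)"""
    norm = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    adj = np.zeros_like(sim)
    for i in range(len(sim)):
        idx = np.argsort(sim[i])[-k:]       # top-k most similar nodes
        adj[i, idx] = 1.0
    return np.maximum(adj, adj.T)           # symmetrize

def gcn_propagate(adj, x):
    """One symmetric-normalized GCN propagation step: D^-1/2 (A+I) D^-1/2 X."""
    a_hat = adj + np.eye(len(adj))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt @ x

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))  # stand-in for LM-derived text embeddings
adj = learn_adjacency(feats, k=2)
h = gcn_propagate(adj, feats)
```

The key design point this sketches: the structure learner operates on semantically rich LM features rather than raw attributes, so edges inferred (or refined) this way can compensate for noisy or missing topology before the GNN aggregates.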
Stats
LangGSL achieves an average improvement of 3.1% compared to the second-best performance across all datasets in the Topology Refinement scenario.
On the Pubmed dataset, LangGSL shows a nearly 15% improvement over the vanilla GCN, with even larger margins over other baselines.
In the Topology Inference scenario, LangGSL (LM) achieves an improvement of 16.37% on Pubmed and 17.16% on ogbn-arxiv over the second-best method.
LangGSL (GSLM) delivers further gains, improving on the next-best result by 16.21% on Pubmed and 4.02% on ogbn-arxiv.