LLaGA is a novel framework that seamlessly integrates the capabilities of Large Language Models (LLMs) with graph-structured data, enabling versatile and generalized performance across various graph tasks.
MIRAGE, a novel graph distillation algorithm, exploits the skewed distribution of computation trees in graphs to condense the training data without compromising model performance. MIRAGE is architecture-agnostic and computationally efficient, outperforming state-of-the-art baselines in accuracy, compression, and distillation speed.
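The computation-tree idea can be illustrated with a toy sketch. All names here (`tree_hash`, `distill`) and the greedy coverage criterion are hypothetical simplifications for illustration only, not MIRAGE's actual procedure, which distills the frequent trees themselves rather than their hashes: decompose each graph into rooted computation trees (one per node, matching what message passing aggregates), count them across the dataset, and keep the small frequent subset that covers most of the mass, which the skewed distribution makes small.

```python
from collections import Counter

def tree_hash(adj, root, depth):
    """Canonical hash of the depth-limited computation tree rooted at
    `root`: the rooted neighborhood that message passing unrolls."""
    if depth == 0:
        return hash(("leaf",))
    return hash(tuple(sorted(tree_hash(adj, u, depth - 1) for u in adj[root])))

def distill(graphs, depth=2, coverage=0.9):
    """Hypothetical condensation sketch: count computation trees over the
    whole dataset, then greedily keep the most frequent trees until they
    account for `coverage` of the total mass. Under a skewed distribution,
    very few trees suffice."""
    counts = Counter()
    for adj in graphs:  # each graph is an adjacency-list dict
        for v in adj:
            counts[tree_hash(adj, v, depth)] += 1
    total = sum(counts.values())
    kept, mass = [], 0
    for tree, c in counts.most_common():
        kept.append(tree)
        mass += c
        if mass >= coverage * total:
            break
    return kept
```

For example, on a dataset of three triangles plus one 3-node path, a single tree type (a node with two neighbors) already covers 10 of the 12 depth-1 trees, so `distill(..., coverage=0.8)` keeps just that one.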
The Weisfeiler-Leman (WL) test is commonly used to measure the expressive power of graph neural networks, but this approach has significant limitations and ethical implications that are often overlooked.
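As background, the 1-WL test itself is a short color-refinement procedure. A minimal hash-based sketch (assuming adjacency-list inputs; the function names are illustrative) shows how it works, and also a classic case where it, and hence any message-passing GNN bounded by it, fails:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: each round, a node's new color is a hash of
    its current color together with the multiset of neighbor colors."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colors.values())  # the graph's color histogram

def wl_distinguishes(adj1, adj2, rounds=3):
    """Different histograms prove non-isomorphism; identical histograms
    mean 1-WL cannot tell the graphs apart (they may still differ)."""
    return wl_colors(adj1, rounds) != wl_colors(adj2, rounds)

# Classic failure case: a 6-cycle vs. two disjoint triangles. Both are
# 2-regular, so every node keeps the same color and 1-WL cannot
# separate them, even though they are clearly non-isomorphic.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_distinguishes(c6, two_triangles))  # False
```

This failure mode is exactly the kind of blind spot that makes 1-WL an incomplete yardstick for expressive power.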
Graph transformers have emerged as a promising alternative to graph neural networks, but their theoretical properties and practical capabilities require deeper understanding. This work provides a comprehensive taxonomy of graph transformer architectures, analyzes their theoretical properties, and empirically evaluates their ability to capture graph structure, mitigate over-smoothing, and alleviate over-squashing.
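Over-smoothing, one of the phenomena evaluated above, can be demonstrated in a few lines. This is a toy sketch under a strong assumption: the "layers" are plain mean aggregation with no learned weights or nonlinearities, which isolates the effect that repeated neighborhood averaging drives all node representations toward each other.

```python
import numpy as np

def mean_aggregate(adj, x):
    """One parameter-free 'GNN layer': average each node's neighborhood."""
    deg = adj.sum(axis=1, keepdims=True)
    return adj @ x / deg

def spread(x):
    """Largest distance of any node representation from the mean."""
    return np.linalg.norm(x - x.mean(axis=0), axis=1).max()

rng = np.random.default_rng(0)
# 6-node path graph with self-loops, dense adjacency matrix
adj = np.eye(6)
for i in range(5):
    adj[i, i + 1] = adj[i + 1, i] = 1.0

x = rng.standard_normal((6, 4))
before = spread(x)
for _ in range(50):  # stack 50 aggregation "layers"
    x = mean_aggregate(adj, x)
print(spread(x) / before)  # prints a value near 0: features collapsed
```

Graph transformers sidestep this particular mechanism because global attention does not repeatedly average over a fixed local neighborhood, which is part of what the empirical evaluation above probes.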
The author proposes a self-guided graph structure refinement (GSR) framework that enhances robustness against adversarial attacks by extracting a clean sub-graph from the attacked graph and using it to guide refinement of the remaining structure, while addressing the technical challenges this self-guided paradigm raises. The approach outperforms existing methods under various attack scenarios.
The author proposes a novel method that generates hierarchical semantic environments for each graph, enhancing graph invariant learning by modeling the relationships between environments and maintaining consistency across different levels of the hierarchy.