The author proposes a three-stage multitask distillation framework for teaching student MLPs on graphs, addressing the challenges of positional information loss and poor generalization.
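To make the core idea concrete, here is a minimal sketch of GNN-to-MLP distillation under a standard soft-label setup; `StudentMLP`, `distill_step`, and the loss weighting are illustrative assumptions, not the paper's three-stage multitask framework.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal GNN-to-MLP distillation sketch. The student MLP sees only node
# features; a weighted sum of label loss and soft-label KL pulls it toward
# the teacher GNN's predictions. (Illustrative only -- not the paper's
# three-stage multitask framework.)

class StudentMLP(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, n_classes)
        )

    def forward(self, x):
        return self.net(x)

def distill_step(student, opt, x, y, teacher_logits, T=2.0, alpha=0.5):
    """One step: cross-entropy on labels + temperature-scaled KL to teacher."""
    opt.zero_grad()
    logits = student(x)
    ce = F.cross_entropy(logits, y)
    kl = F.kl_div(
        F.log_softmax(logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    loss = alpha * ce + (1 - alpha) * kl
    loss.backward()
    opt.step()
    return loss.item()
```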
The author proposes a novel graph learning framework, GCN-SA, that uses self-attention mechanisms to enhance structure learning and node representations. This approach improves the model's ability to capture long-range dependencies in graphs with varying levels of homophily.
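A hedged sketch of why self-attention helps with long-range dependencies: every node attends to every other node, so information flow is not limited to graph edges. The module below is illustrative and omits GCN-SA's structure re-learning.

```python
import math
import torch
import torch.nn as nn

# Sketch of full self-attention over node features: every node attends to
# every other node, so information can flow beyond the edge structure.
# (Illustrative; GCN-SA's actual modules also re-learn the graph structure.)
class NodeSelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, h):  # h: [num_nodes, dim]
        scores = self.q(h) @ self.k(h).T / math.sqrt(h.size(-1))
        return scores.softmax(dim=-1) @ self.v(h)
```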
The author proposes the Cluster Information Transfer (CIT) mechanism to enhance the generalization ability of Graph Neural Networks (GNNs) by learning invariant representations, addressing structure shifts in test graphs.
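One way to read "cluster information transfer" is as an augmentation that simulates structure shift by moving nodes across cluster statistics; the sketch below is an assumption in that spirit, not the paper's exact mechanism.

```python
import torch

# Hedged sketch of a cluster-to-cluster transfer augmentation: nodes from a
# source cluster are re-normalized to a target cluster's feature statistics,
# simulating a structure shift at training time. (An assumption in the
# spirit of CIT, not the paper's exact mechanism; assumes clusters have at
# least two nodes so the standard deviation is defined.)
def cluster_transfer(h, cluster_id, src, dst, eps=1e-6):
    mask_s, mask_d = cluster_id == src, cluster_id == dst
    mu_s, std_s = h[mask_s].mean(0), h[mask_s].std(0) + eps
    mu_d, std_d = h[mask_d].mean(0), h[mask_d].std(0) + eps
    out = h.clone()
    out[mask_s] = (h[mask_s] - mu_s) / std_s * std_d + mu_d
    return out
```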
The authors investigate the expressivity of different versions of graph neural networks through modal and guarded fragments of first-order logic with counting, aiming to determine whether 2-GNNs are more expressive than 1-GNNs.
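For intuition, here is an example of the kind of guarded counting property such fragments express; the predicate names are illustrative.

```latex
% Illustrative guarded counting formula: node x satisfies \varphi iff it has
% at least two neighbors labeled Blue -- the kind of local counting property
% these logic fragments (and message-passing GNNs) can express.
\[
  \varphi(x) \;=\; \exists^{\geq 2} y \,\bigl( E(x, y) \wedge \mathrm{Blue}(y) \bigr)
\]
```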
The author introduces GraphControl, a deployment module that addresses the "transferability-specificity dilemma" in graph transfer learning by incorporating downstream-specific information into pre-trained models, yielding significant performance gains.
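A minimal sketch of ControlNet-style conditioning of a frozen pre-trained encoder, which is one way to inject downstream-specific information without disturbing transferable knowledge; all module and parameter names here are assumptions for illustration, not GraphControl's API.

```python
import copy
import torch
import torch.nn as nn

class ConditionedEncoder(nn.Module):
    """ControlNet-style conditioning sketch: a frozen pre-trained encoder plus
    a trainable copy that receives downstream-specific conditions through
    zero-initialized links, so training starts from the pre-trained behavior.
    Assumes `pretrained` maps [N, dim] -> [N, dim]."""

    def __init__(self, pretrained: nn.Module, dim: int):
        super().__init__()
        self.trainable = copy.deepcopy(pretrained)  # downstream-specific branch
        self.frozen = pretrained                    # keeps universal knowledge
        for p in self.frozen.parameters():
            p.requires_grad = False
        self.zero_in = nn.Linear(dim, dim)   # zero-initialized, so the
        self.zero_out = nn.Linear(dim, dim)  # condition is a no-op at step 0
        for lin in (self.zero_in, self.zero_out):
            nn.init.zeros_(lin.weight)
            nn.init.zeros_(lin.bias)

    def forward(self, x, condition):
        h = self.frozen(x)                                    # [N, dim]
        h_cond = self.trainable(x + self.zero_in(condition))  # inject condition
        return h + self.zero_out(h_cond)
```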
The author argues that incorporating physical inductive biases, such as second-order motion laws, into Graph Neural Networks improves generalization, proposing SEGNO, which learns continuous trajectories between system states.
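A hedged sketch of what a second-order bias looks like in practice: the learned module predicts accelerations, and positions and velocities are integrated jointly; `accel_fn` stands in for a GNN and is an assumption, not SEGNO's implementation.

```python
import torch

# Sketch of a second-order inductive bias: a learned module predicts
# accelerations, and positions/velocities are integrated jointly
# (semi-implicit Euler), mirroring Newtonian second-order dynamics.
def second_order_step(pos, vel, accel_fn, dt=0.1):
    acc = accel_fn(pos, vel)  # dv/dt = a(x, v), learned
    vel = vel + dt * acc
    pos = pos + dt * vel      # dx/dt = v
    return pos, vel
```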
The SHERD method enhances robustness and performance in graph neural networks by identifying vulnerable nodes.
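One plausible screening criterion for vulnerability is how far a node's representation moves under perturbation; the function below is an assumption in that spirit, not SHERD's actual procedure.

```python
import torch

# Hedged sketch of distance-based screening for vulnerable nodes: rank nodes
# by how far their representations move under a perturbed input and flag the
# top-k. (An assumption in the spirit of the summary, not SHERD's procedure.)
def vulnerable_nodes(h_clean, h_perturbed, k):
    shift = (h_clean - h_perturbed).norm(dim=-1)  # per-node displacement
    return shift.topk(k).indices                  # the k most affected nodes
```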
Graph Neural Networks (GNNs) advance graph analysis by aggregating information over graph structure, supporting tasks such as node classification, link prediction, and graph classification.
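The aggregation step common to most GNN layers can be stated in a few lines; this sketch uses a dense adjacency matrix purely for clarity.

```python
import torch

# The aggregation step underlying most GNN layers: each node averages its
# neighbors' features (dense adjacency used here only for clarity).
def mean_aggregate(adj, h):  # adj: [N, N] 0/1 matrix, h: [N, d]
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
    return (adj @ h) / deg                           # per-node neighbor mean
```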