A deep dynamic residual graph convolutional network (DynaResGCN) model is designed to effectively detect overlapping communities in graphs by incorporating residual connections, dynamic dilated aggregation, and an encoder-decoder framework.
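The core residual idea can be sketched in a few lines. The snippet below shows only the residual-connection part of such a deep GCN (the dynamic dilated aggregation and the encoder-decoder framework are omitted); all names, sizes, and the 0.1 weight scale are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def res_gcn_layer(H, A_norm, W):
    """One residual GCN layer: the skip connection H + ReLU(A_norm @ H @ W)
    keeps signal (and gradients) flowing through deep stacks."""
    return H + np.maximum(A_norm @ H @ W, 0.0)

# Toy 4-node graph and a deep stack of 16 residual layers.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_norm = normalize_adj(A)
H = rng.standard_normal((4, 8))
for _ in range(16):
    W = rng.standard_normal((8, 8)) * 0.1  # small init keeps activations stable
    H = res_gcn_layer(H, A_norm, W)
```

Without the residual term, stacking this many layers typically over-smooths node features toward indistinguishability; the skip connection is what makes depth usable.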
CORE, a novel data augmentation framework, leverages the Information Bottleneck principle to eliminate noisy and spurious edges while recovering missing edges in graphs, thereby enhancing the generalizability of link prediction models.
Graph Prompt Feature (GPF) is a universal prompt-based tuning method that can be applied to pre-trained GNN models under any pre-training strategy, achieving performance equivalent to specialized prompting functions.
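The mechanism is remarkably simple: a single learnable vector is added to every node's input features while the pre-trained GNN stays frozen. The sketch below illustrates that operation under that assumption; the prompt values shown are arbitrary, and in practice the vector would be trained by gradient descent on the downstream loss.

```python
import numpy as np

def apply_gpf(X, p):
    """Graph Prompt Feature: add one shared learnable prompt vector p to
    every node's input features before the frozen GNN processes them."""
    return X + p  # p broadcasts across all nodes

# Only p (feature_dim parameters) is tuned for the downstream task;
# the pre-trained GNN weights are never touched.
X = np.ones((5, 3))               # 5 nodes, 3-dim input features
p = np.array([0.1, -0.2, 0.3])    # learnable prompt (arbitrary values here)
X_prompted = apply_gpf(X, p)
```

The appeal is the parameter count: tuning `p` costs only as many parameters as the feature dimension, regardless of the size of the pre-trained model.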
The Graph Spectral Token is a novel approach to directly encode graph spectral information into the transformer architecture, capturing the global structure of the graph and enhancing the expressive power of graph transformers.
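As a minimal illustration of encoding spectral information into a token, the sketch below packs the eigenvalues of the symmetric normalized Laplacian into a fixed-size vector that could be prepended to a transformer's node-token sequence. This is a simplification for illustration only; the actual method's encoding of spectral information into the token is more elaborate than raw eigenvalue padding.

```python
import numpy as np

def graph_spectral_token(A, dim):
    """Summarize the normalized-Laplacian spectrum of the graph into a
    fixed-size token vector (global structure lives in the spectrum)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
    eigvals = np.linalg.eigvalsh(L)   # ascending, in [0, 2]
    k = min(dim, len(eigvals))
    token = np.zeros(dim)
    token[:k] = eigvals[:k]           # pad/truncate to the model dimension
    return token

# Triangle graph: spectrum of the normalized Laplacian is {0, 1.5, 1.5}.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
token = graph_spectral_token(A, 4)
```

The token would then be stacked above the node tokens (e.g. `np.vstack([token, X])`) so self-attention can mix global spectral context into every node representation.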
Deleting edges can simultaneously address the problems of over-squashing and over-smoothing in graph neural networks by optimizing the spectral gap of the graph.
Message passing neural networks (MPNNs) can generalize effectively to unseen sparse, noisy graphs sampled from a mixture of graphons, as long as the graphs are sufficiently large.
The VC dimension of graph neural networks with Pfaffian activation functions, such as tanh, sigmoid, and arctangent, is bounded with respect to the network hyperparameters (number of parameters, layers, nodes, feature dimension) as well as the number of colors resulting from the 1-WL test on the graph domain.
The lightweight Graph Inception Diffusion Network (GIDN) model generalizes graph diffusion across different feature spaces and uses an inception module to avoid the large computational cost of complex network structures, achieving high-efficiency link prediction.
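The generalized-diffusion operator underlying such models has a standard closed form, S = Σ_k θ_k T^k for a transition matrix T and weight sequence θ. The sketch below computes a truncated version with personalized-PageRank-style weights; the weights and truncation depth are illustrative assumptions, and GIDN's inception module (combining diffusions over different feature spaces) is not shown.

```python
import numpy as np

def generalized_diffusion(A, thetas):
    """Truncated generalized graph diffusion S = sum_k theta_k * T^k,
    with the row-stochastic random-walk transition matrix T = D^-1 A."""
    T = A / A.sum(axis=1, keepdims=True)
    S = np.zeros_like(T)
    Tk = np.eye(len(A))
    for theta in thetas:
        S += theta * Tk
        Tk = Tk @ T
    return S

# Personalized-PageRank weights theta_k = alpha * (1 - alpha)^k, truncated.
alpha = 0.15
thetas = [alpha * (1 - alpha) ** k for k in range(32)]

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
S = generalized_diffusion(A, thetas)
```

Because T is row-stochastic, every row of S sums to Σ_k θ_k (≈ 1 for a deep enough truncation), so S acts as a smoothed, multi-hop adjacency that a link predictor can score pairs against.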
The proposed Continuous Spiking Graph Neural Networks (COS-GNN) framework integrates spiking neural networks (SNNs) and continuous graph neural networks (CGNNs) to achieve energy-efficient and effective graph learning, while addressing the information loss issue in SNNs through high-order spike representation.
Redundancy in the information flow and computation of graph neural networks can lead to over-squashing, limiting their expressivity and accuracy. The proposed DAG-MLP approach systematically eliminates redundant information by using neighborhood trees, and exploits computational redundancy by merging isomorphic subtrees, achieving higher expressivity and accuracy than standard graph neural networks.
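The subtree-merging idea can be illustrated with memoized canonical forms of depth-bounded unfolding trees: isomorphic subtrees hash to the same canonical string and are computed only once, turning the tree computation into a shared DAG. This toy sketch ignores node and edge labels and is an assumption-laden simplification, not the DAG-MLP architecture itself.

```python
from functools import lru_cache

# Toy unlabeled graph as adjacency lists: a triangle (0,1,2) with a
# pendant node 3 attached to node 2.
ADJ = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

@lru_cache(maxsize=None)
def tree_hash(node, depth):
    """Canonical form of the depth-bounded unfolding tree rooted at `node`.
    Memoization merges isomorphic subtrees: each distinct (node, depth)
    subtree is canonicalized once and reused, mirroring DAG-style sharing."""
    if depth == 0:
        return "()"
    children = sorted(tree_hash(n, depth - 1) for n in ADJ[node])
    return "(" + "".join(children) + ")"
```

Nodes 0 and 1 are structurally interchangeable in this graph, so their depth-2 unfolding trees are isomorphic and receive identical hashes, while the pendant node 3 does not; the `lru_cache` hit count is exactly the computational redundancy being exploited.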