Existing Graph Neural Networks (GNNs) struggle with multi-label node classification, even with abundant data or complex architectures, because they fail to effectively incorporate label and positional information; GNN-MultiFix addresses this by integrating feature, label, and positional information to improve performance.
T-GAE, a novel transferable graph autoencoder framework, leverages the transferability and robustness of GNNs to achieve efficient and accurate network alignment on large, unseen graphs without retraining.
This paper introduces HoGA, a novel graph attention module that enhances existing single-hop attention models by incorporating long-distance relationships through efficient sampling of the k-hop neighborhood, leading to significant accuracy improvements in node classification tasks.
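As a rough illustration of the sampling idea (a sketch, not HoGA's actual module), the code below gathers a node's k-hop neighborhood by breadth-first search and attends over a random subsample of it; the function names, the plain dot-product scoring, and the sample size `m` are assumptions made for the example.

```python
import numpy as np
from collections import deque

def k_hop_neighborhood(adj, src, k):
    """Nodes within k hops of src (BFS over an adjacency-list dict), excluding src."""
    seen = {src}
    frontier = deque([(src, 0)])
    out = set()
    while frontier:
        u, depth = frontier.popleft()
        if depth == k:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                out.add(v)
                frontier.append((v, depth + 1))
    return sorted(out)

def sampled_khop_attention(X, adj, src, k=2, m=3, seed=0):
    """Attend over a random sample of the k-hop neighborhood of src."""
    rng = np.random.default_rng(seed)
    hood = k_hop_neighborhood(adj, src, k)
    sample = rng.choice(hood, size=min(m, len(hood)), replace=False)
    scores = X[sample] @ X[src]            # dot-product attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the sampled nodes
    return weights @ X[sample]             # aggregated representation for src
```

Sampling keeps the cost per node bounded even when the k-hop neighborhood grows exponentially with k.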
This paper introduces MaGNet, a novel graph neural network framework that integrates local and global information to improve performance in graph-focused tasks while addressing limitations of traditional GNNs like over-smoothing and lack of interpretability.
This paper proposes the Dual-Frequency Filtering Self-aware Graph Neural Network (DFGNN) to address two key challenges of graph neural networks (GNNs), which otherwise excel at processing graph-structured data: node-representation distortion caused by interference between topology and attributes, and the tendency of most GNNs to focus only on low-frequency filtering, overlooking important high-frequency information in graph signals.
This paper proposes DFGNN, a novel graph neural network architecture that leverages dual-frequency filtering and self-aware mechanisms to improve performance on both homophilic and heterophilic graphs for semi-supervised node classification tasks.
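Generic dual-frequency filtering can be sketched with the symmetric normalized Laplacian: `I - L` acts as a low-pass (smoothing) filter and `L` as a high-pass (sharpening) filter. This is a minimal illustration of the general idea, not DFGNN's actual layer; the mixing weight `alpha` and the function names are assumptions.

```python
import numpy as np

def sym_norm_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(1)
    Dinv = np.diag(d ** -0.5)
    return np.eye(len(A)) - Dinv @ A @ Dinv

def dual_frequency_layer(X, A, alpha=0.5):
    """Mix a low-pass pass (I - L: neighborhood averaging) with a
    high-pass pass (L: neighborhood differencing) of the features X."""
    L = sym_norm_laplacian(A)
    low = (np.eye(len(A)) - L) @ X   # smooths features across edges
    high = L @ X                     # sharpens differences across edges
    return alpha * low + (1 - alpha) * high
```

On a homophilic graph the low-pass term dominates usefully; on a heterophilic graph the high-pass term preserves the differences between neighboring nodes that pure smoothing would erase.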
To overcome the vulnerability of existing graph neural networks (GNNs) to label noise, LEGNN presents a noise-robust and efficient GNN training method that leverages a label ensemble together with a partial-label learning strategy.
LEGNN is a novel method for training robust Graph Neural Networks (GNNs) that are resistant to label noise, achieving this through a label ensemble approach and reducing computational complexity compared to traditional reliable labeling methods.
Contrary to common belief, graph convolutional networks (GCNs) can be designed to avoid oversmoothing, a phenomenon hindering the performance of deep GCNs, by simply initializing the network weights with higher variance, pushing them into a "chaotic" and thus non-oversmoothing phase.
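A toy numerical check of this claim: propagate random features through a deep tanh-GCN on a ring graph and compare how far apart the node representations remain for small versus large weight-initialization variance. This is an illustrative sketch, not the paper's exact setup; the graph, depth, feature width, and variance values are arbitrary choices for the example.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(len(A))
    d = A.sum(1)
    Dinv = np.diag(d ** -0.5)
    return Dinv @ A @ Dinv

def mean_pairwise_distance(X):
    """Average Euclidean distance between node representations (rows of X)."""
    n = len(X)
    dists = [np.linalg.norm(X[i] - X[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def deep_gcn_spread(sigma, n_layers=30, d=16, seed=0):
    """Run a deep tanh-GCN with weights ~ N(0, sigma^2/d) on a ring graph;
    return how spread out the final node representations are."""
    rng = np.random.default_rng(seed)
    n = 8
    A = np.zeros((n, n))
    for i in range(n):                      # ring graph
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    A_hat = normalized_adjacency(A)
    X = rng.standard_normal((n, d))
    for _ in range(n_layers):
        W = rng.standard_normal((d, d)) * sigma / np.sqrt(d)
        X = np.tanh(A_hat @ X @ W)
    return mean_pairwise_distance(X)
```

With a small `sigma` the representations collapse toward each other layer by layer (oversmoothing); with a large `sigma` the dynamics become chaotic and the nodes stay distinguishable, matching the paper's "non-oversmoothing phase".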
ScaleNet, a novel graph neural network architecture, achieves state-of-the-art node classification accuracy in both homophilic and heterophilic directed graphs by leveraging the concept of scale invariance and flexibly combining multi-scale graph representations.
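One simple way to combine multi-scale representations of a directed graph (a sketch of the general idea, not ScaleNet itself; the function name and choice of scales are assumptions) is to concatenate feature aggregations over powers of the adjacency matrix and of its transpose, so that both k-step successors and k-step predecessors contribute:

```python
import numpy as np

def multiscale_features(X, A, scales=(1, 2)):
    """Concatenate node features aggregated at several scales of a
    directed graph: A^k X uses k-step out-neighborhoods, (A^k)^T X
    the corresponding in-neighborhoods."""
    parts = [X]
    for k in scales:
        Ak = np.linalg.matrix_power(A, k)
        parts.append(Ak @ X)           # k-step successors
        parts.append(Ak.T @ X)         # k-step predecessors
    return np.concatenate(parts, axis=1)
```

A downstream classifier (or a learned gating over the scales) can then weight homophilic and heterophilic signals differently, which is the flexibility the summary alludes to.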