The study explores reversing the message-passing process in Graph Neural Networks (GNNs) to sharpen node representations and better distinguish neighboring nodes with different labels. Applying the reverse process to three GNN variants yields significant improvements in prediction performance on heterophilic graph data. The study shows that the reverse mechanism prevents over-smoothing as layers are stacked, enhancing the ability to capture the long-range dependencies crucial for performance on heterophilic datasets. Several methods are proposed for constructing a reverse diffusion function for different backbone models, including GRAND, GCN, and GAT. Experimental results demonstrate that the reverse process improves prediction accuracy over forward-only models, particularly on heterophilic datasets. The study also investigates datasets with low label rates and finds that the reverse process remains effective even when training labels are scarce.
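The contrast between forward (smoothing) and reverse (sharpening) diffusion can be sketched on a toy graph. The explicit-Euler discretization, function names, and step size below are illustrative assumptions, not the paper's exact formulation; the point is that a forward heat-diffusion step averages neighboring features, while its algebraic inverse undoes that averaging:

```python
import numpy as np

def norm_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    return Dinv @ A @ Dinv

def forward_step(X, A_hat, alpha=0.5):
    # One explicit-Euler heat-diffusion step: X <- X + alpha * (A_hat X - X).
    # Repeated application drives neighboring features together (over-smoothing).
    return X + alpha * (A_hat @ X - X)

def reverse_step(X, A_hat, alpha=0.5):
    # Invert the forward step by solving (I + alpha*(A_hat - I)) Z = X.
    # This sharpens representations instead of smoothing them.
    n = A_hat.shape[0]
    M = np.eye(n) + alpha * (A_hat - np.eye(n))
    return np.linalg.solve(M, X)

# Toy usage: a 4-node path graph with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = norm_adj(A)
X = np.array([[1., 0.], [0., 1.], [1., 1.], [0., 0.]])

X_smooth = forward_step(X, A_hat)        # neighbors' features pulled together
X_recovered = reverse_step(X_smooth, A_hat)
print(np.allclose(X, X_recovered))       # reverse exactly undoes the forward step
```

In practice the backbone models use learned, nonlinear propagation, so the paper develops approximate inversion schemes rather than a direct linear solve; the sketch only illustrates why reversing diffusion counteracts smoothing.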
Key insights distilled from the paper by MoonJeong Pa... (arxiv.org, 03-19-2024): https://arxiv.org/pdf/2403.10543.pdf