The paper addresses the challenge of maintaining accurate representations as graphs evolve over time, since such evolution can continuously degrade GNN performance. It first establishes a theoretical lower bound, proving that under mild conditions representation distortion inevitably occurs over time.
To estimate this temporal distortion without human annotation after deployment, the authors analyze representation distortion from an information-theoretic perspective and attribute it primarily to inaccurate feature extraction during graph evolution. Accordingly, they introduce SMART, a simple yet effective baseline that augments the deployed model with an adaptive feature extractor trained through self-supervised graph reconstruction.
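The sketch below illustrates the general idea of adapting a deployed GNN feature extractor through self-supervised graph reconstruction, in the spirit of SMART. It is a minimal, hypothetical example: the names (`FeatureExtractor`, `adapt_on_snapshot`, `normalize_adj`), the two-layer GCN encoder, the inner-product decoder, and all hyperparameters are illustrative assumptions, not the authors' actual architecture or training procedure.

```python
# Hypothetical sketch of label-free adaptation via graph reconstruction.
# All class/function names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureExtractor(nn.Module):
    """A minimal two-layer GCN operating on a dense, normalized adjacency."""

    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    deg_inv_sqrt = adj.sum(dim=1).clamp(min=1.0).pow(-0.5)
    return deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]


def adapt_on_snapshot(model: FeatureExtractor,
                      x: torch.Tensor,
                      adj: torch.Tensor,
                      steps: int = 50,
                      lr: float = 1e-3) -> None:
    """Self-supervised adaptation on an unlabeled graph snapshot:
    node embeddings are pushed to reconstruct the observed adjacency."""
    adj_norm = normalize_adj(adj)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        z = model(x, adj_norm)                      # node embeddings
        logits = z @ z.t()                          # inner-product decoder
        loss = F.binary_cross_entropy_with_logits(logits, adj)
        opt.zero_grad()
        loss.backward()
        opt.step()


# Usage: re-adapt the deployed extractor whenever a new snapshot arrives.
if __name__ == "__main__":
    n, d = 100, 16
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) < 0.05).float()
    adj = ((adj + adj.t()) > 0).float()             # make it symmetric
    model = FeatureExtractor(d, 32, 16)
    adapt_on_snapshot(model, x, adj)
```

The key point of this kind of adaptation is that the reconstruction loss requires only the observed graph structure, so the feature extractor can keep adjusting after deployment without any labels.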
On synthetic random graphs, the authors further tighten the earlier lower bound to show that distortion over time is inevitable, and they empirically observe that SMART achieves good estimation performance. Moreover, SMART consistently delivers outstanding generalization estimation on four real-world evolving graphs. Ablation studies underscore the necessity of graph reconstruction, showing that removing it significantly degrades estimation performance.
Key ideas extracted from the source content at arxiv.org, by Bin Lu, Tingy..., 04-09-2024: https://arxiv.org/pdf/2404.04969.pdf