Core Concepts
The authors propose a novel two-dimensional (2-D) graph convolution paradigm that unifies and generalizes existing spectral graph convolution approaches, enabling error-free construction of arbitrary target outputs.
Abstract
The paper addresses critical limitations of the spectral graph convolution paradigms used in existing spectral Graph Neural Networks (GNNs). The authors analyze the three popular convolution paradigms (Paradigms I, II, and III) and prove that, under certain conditions on the input graph signals, none of them can construct arbitrary target outputs.
To address this, the authors rethink spectral graph convolution from a 2-D signal convolution perspective and propose a new 2-D graph convolution paradigm. They prove that the 2-D graph convolution unifies the existing paradigms as special cases and is always capable of constructing the target output with zero error. Furthermore, they show that the number of parameters in the 2-D graph convolution is irreducible: it cannot be lowered without losing the zero-construction-error guarantee.
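To make the contrast concrete, below is a minimal NumPy sketch. The 1-D filter follows the common polynomial paradigm Z = (Σ_k θ_k L^k) X W from the spectral GNN literature; the "2-D" variant shown, which assigns a separate coefficient to every (graph frequency, feature channel) pair, is only one illustrative reading of a 2-D spectral filter and is not the paper's exact formulation. All function names and tensor shapes here are assumptions for illustration.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(A.shape[0]) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def spectral_filter_1d(L, X, theta, W):
    """Common 1-D paradigm: Z = (sum_k theta_k L^k) X W.
    One scalar coefficient theta_k per polynomial order, shared by all channels."""
    n = L.shape[0]
    gL = np.zeros_like(L)
    Lk = np.eye(n)
    for t in theta:
        gL += t * Lk
        Lk = Lk @ L
    return gL @ X @ W

def spectral_filter_2d_illustrative(L, X, Theta):
    """Illustrative '2-D' filtering (an assumption, not the paper's exact form):
    a separate coefficient for every (graph frequency, feature channel) pair,
    i.e. Z = U (Theta * (U^T X)) with Theta of shape (n_nodes, n_features)."""
    evals, U = np.linalg.eigh(L)   # graph Fourier basis from the Laplacian
    X_hat = U.T @ X                # graph Fourier transform of each channel
    return U @ (Theta * X_hat)     # frequency- and channel-specific scaling
```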
Building on the 2-D graph convolution, the authors propose ChebNet2D, an efficient and effective spectral GNN implemented with Chebyshev polynomial approximation. Extensive experiments on 18 benchmark datasets demonstrate its superior performance and efficiency compared to state-of-the-art GNN methods.
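For reference, here is a minimal sketch of Chebyshev polynomial filtering as used in ChebNet-style spectral GNNs: the filter is approximated with the recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), which avoids an explicit eigendecomposition at filtering time. This illustrates the general Chebyshev approximation technique only, not the specific ChebNet2D architecture; the function name and interface are assumptions.

```python
import numpy as np

def chebyshev_filter(L, X, coeffs):
    """Chebyshev-polynomial spectral filtering, ChebNet-style:
    Z = sum_k c_k T_k(L_tilde) X, where T_0 = I, T_1 = L_tilde,
    T_k = 2 L_tilde T_{k-1} - T_{k-2}, and L_tilde = 2L/lambda_max - I
    rescales the Laplacian eigenvalues into [-1, 1]."""
    lam_max = np.linalg.eigvalsh(L).max()          # in practice often approximated
    L_tilde = (2.0 / lam_max) * L - np.eye(L.shape[0])
    T_prev, T_curr = X, L_tilde @ X                # T_0(L~) X and T_1(L~) X
    Z = coeffs[0] * T_prev
    if len(coeffs) > 1:
        Z += coeffs[1] * T_curr
    for c in coeffs[2:]:
        T_prev, T_curr = T_curr, 2.0 * (L_tilde @ T_curr) - T_prev
        Z += c * T_curr
    return Z
```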
Stats
No specific numerical statistics are reproduced in this summary; the paper's key insights are derived through theoretical analysis and proofs, complemented by the benchmark experiments noted above.
Quotes
No striking quotes that directly support the key arguments were identified.