The paper analyzes the statistical generalization of Graph Neural Networks (GNNs) from a manifold perspective. It considers graphs whose nodes are points sampled from underlying manifolds and proves that GNNs generalize effectively to unseen points on those manifolds when the number of sampled points is large enough.
The key insights are:
The paper leverages manifold theory to analyze the statistical generalization gap of GNNs operating on graphs constructed from points sampled on manifolds. It studies the generalization gaps of GNNs on both node-level and graph-level tasks.
The paper shows that the generalization gaps decrease with the number of nodes in the training graphs, which guarantees that GNNs generalize to unseen points on the manifolds. This holds both for graphs built with a Gaussian kernel and for ε-graphs.
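For concreteness, here is a minimal sketch of the two graph constructions on points sampled from a manifold (the unit sphere is used only as an example); the bandwidth and ε values are illustrative placeholders, not the paper's choices:

```python
import numpy as np

def sample_sphere(n, d=3, seed=0):
    """Sample n points uniformly from the unit sphere S^{d-1} embedded in R^d."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def gaussian_kernel_graph(points, bandwidth=0.2):
    """Dense weighted adjacency W_ij = exp(-||x_i - x_j||^2 / bandwidth), zero diagonal."""
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    w = np.exp(-sq_dists / bandwidth)
    np.fill_diagonal(w, 0.0)
    return w

def epsilon_graph(points, eps=0.3):
    """Unweighted adjacency: connect pairs whose Euclidean distance is below eps."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    a = (dists < eps).astype(float)
    np.fill_diagonal(a, 0.0)
    return a

x = sample_sphere(500)
W_gauss = gaussian_kernel_graph(x)   # Gaussian-kernel graph
A_eps = epsilon_graph(x)             # epsilon-graph
```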
The generalization gap scales with the size of the GNN architecture, increasing polynomially with the number of features and exponentially with the number of layers.
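Schematically, these dependencies take the following shape, where N is the number of sampled nodes, F the number of features, and L the number of layers; the symbols C, α, and g are generic placeholders conveying only the qualitative scaling stated above, not the paper's exact constants or exponents:

```latex
% Illustrative shape only: polynomial in F, exponential in L, vanishing as N grows.
\[
  \mathrm{GA}(N, F, L) \;\lesssim\; C \cdot \mathrm{poly}(F)\, \alpha^{L}\, g(N),
  \qquad g(N) \to 0 \ \text{as} \ N \to \infty .
\]
```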
The theoretical results are validated on multiple real-world datasets, demonstrating the linear decay of the generalization gap with the logarithm of the number of nodes.
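One hypothetical way to check this trend empirically is to record the train/test gap at several training-graph sizes and fit it against log N; the gap values below are placeholders for illustration, not results from the paper:

```python
import numpy as np

# Hypothetical measured generalization gaps at increasing training-graph sizes.
# In practice, replace these with |training loss - test loss| from your own GNN runs.
num_nodes = np.array([500, 1000, 2000, 4000, 8000, 16000])
gaps = np.array([0.41, 0.35, 0.29, 0.24, 0.18, 0.13])  # placeholder values

# Fit gap = a * log(N) + b; a negative slope indicates a linear decay in log(N).
slope, intercept = np.polyfit(np.log(num_nodes), gaps, deg=1)
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
```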
The paper also analyzes the generalization of GNNs on graph-level tasks, showing that a single graph sampled from the underlying manifold, with a sufficiently large number of sampled points, provides an effective approximation for classifying the manifold itself.
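Below is a minimal numpy sketch of a graph-level pipeline of this kind: message passing over a single sampled graph, mean pooling to a graph embedding, and a linear readout over manifold classes. The random adjacency, features, and weights are placeholders, not the paper's architecture:

```python
import numpy as np

def normalize_adjacency(a):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by common GNN layers."""
    a_hat = a + np.eye(a.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gnn_graph_logits(a, x, weights, readout):
    """L-layer message passing followed by mean pooling and a linear readout."""
    a_norm = normalize_adjacency(a)
    h = x
    for w in weights:
        h = np.maximum(a_norm @ h @ w, 0.0)   # propagate neighbors, apply ReLU
    g = h.mean(axis=0)                        # graph-level embedding via mean pooling
    return g @ readout                        # class logits for the sampled manifold

# Example: classify a single graph of 500 sampled points.
rng = np.random.default_rng(0)
n, f_in, f_hid, n_classes = 500, 3, 16, 2
x = rng.standard_normal((n, f_in))             # node features (e.g., point coordinates)
a = (rng.random((n, n)) < 0.02).astype(float)  # stand-in adjacency; use an eps-graph in practice
a = np.maximum(a, a.T)
weights = [rng.standard_normal((f_in, f_hid)) * 0.1,
           rng.standard_normal((f_hid, f_hid)) * 0.1]
readout = rng.standard_normal((f_hid, n_classes)) * 0.1
print(gnn_graph_logits(a, x, weights, readout))
```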
Overall, the paper provides a unified theoretical framework to understand the generalization capabilities of GNNs from a manifold perspective, with practical implications for the design of large-scale GNN architectures.