Core Concepts
As random graphs grow, graph neural network outputs converge in probability to constant functions, which yields an upper bound on the uniform expressiveness of these architectures.
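A hedged formalization of this core claim (the symbols h, G_n, and c are illustrative, not the paper's exact notation): for a GNN h applied to graphs drawn from an Erdős–Rényi distribution, the outputs concentrate around a single constant.

```latex
% Illustrative statement of convergence in probability;
% h, G_n, c are assumed symbols, not the paper's notation.
\exists\, c \;\;\forall \varepsilon > 0 :\quad
\Pr\bigl[\,\lVert h(G_n) - c \rVert > \varepsilon\,\bigr] \longrightarrow 0
\quad \text{as } n \to \infty, \qquad G_n \sim \mathrm{ER}(n, p).
```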
Abstract
Introduction
Graph neural networks (GNNs) are widely used for various learning tasks on graphs.
Recent focus on graph transformer architectures for graph representation learning.
Related Work
Studies on the expressive power of MPNNs and graph distinguishability.
Recent research on uniform expressiveness in GNNs.
Preliminaries
Definitions of featured random graphs and convergence.
Introduction to MPNNs and graph transformers.
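A minimal sketch of the message-passing step this refers to, assuming mean aggregation over neighbors and a randomly initialized update; the names mpnn_layer, adj, feats, w_self, and w_msg are illustrative, not taken from the paper.

```python
import numpy as np

def mpnn_layer(adj, feats, w_self, w_msg):
    """One message-passing layer with mean aggregation over neighbors."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # guard isolated nodes
    neigh_mean = (adj @ feats) / deg                       # mean over neighbors
    return np.tanh(feats @ w_self + neigh_mean @ w_msg)   # combine and update

# Toy usage: 5 nodes, 4-dimensional features, random weights.
rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.5).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T                   # undirected, no self-loops
feats = rng.normal(size=(5, 4))
w_self, w_msg = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
out = mpnn_layer(adj, feats, w_self, w_msg)                # shape (5, 4)
```

Graph transformers replace the neighbor mean with attention over all nodes, which can be read as a weighted global mean; that is what lets a single analysis cover both families.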
Model Architectures via Term Languages
Definition of a term language for GNNs capturing various architectures.
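As an illustration of what such a term language might look like (this grammar fragment is a guess at the general shape, not the paper's exact syntax), terms could be built from node features, affine maps, nonlinearities, and mean aggregations:

```latex
% Illustrative grammar fragment; symbols are assumptions, not the paper's definition.
\tau \;::=\; x                                        % node feature
  \;\mid\; W\,\tau + b                                % affine transformation
  \;\mid\; \sigma(\tau)                               % pointwise nonlinearity
  \;\mid\; \operatorname{mean}_{v \in N(u)} \tau(v)   % local (neighbor) aggregation
  \;\mid\; \operatorname{mean}_{v \in V} \tau(v)      % global aggregation / readout
```

Local mean aggregation captures MPNN layers, while the global mean captures transformer-style attention and readouts, so one language can describe both architectures.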
Convergence Theorems
Theorems on convergence for Erdős–Rényi and Stochastic Block Model distributions.
Corollary on the asymptotic convergence of class probabilities.
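A hedged paraphrase of the theorem shape suggested by this section (the precise hypotheses are in the paper; the notation ER(n, p), D, ξ, and c_ξ here is assumed): on featured Erdős–Rényi graphs with i.i.d. node features, every term denotes a function that concentrates, and hence so do softmax class probabilities.

```latex
% Paraphrased theorem shape; notation is illustrative.
G_n \sim \mathrm{ER}(n, p), \;\; \text{node features i.i.d.} \sim D
\;\Longrightarrow\;
\forall \xi \;\exists\, c_\xi :\quad
\xi(G_n, v) \xrightarrow{\;\Pr\;} c_\xi \quad (n \to \infty).
% Corollary: class probabilities stabilize as well,
% \operatorname{softmax}(\xi(G_n, v)) \xrightarrow{\;\Pr\;} \operatorname{softmax}(c_\xi),
% so the predicted class is eventually constant.
```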
Experimental Evaluation
Empirical verification of convergence on synthetic experiments.
Impact of different weighted mean aggregations and graph distributions on convergence.
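A small simulation in the spirit of these experiments (entirely a sketch; the paper's actual models, graph sizes, aggregations, and metrics may differ): run a randomly initialized mean-aggregation GNN on Erdős–Rényi graphs of growing size and observe the spread of outputs across sampled graphs shrink.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
w_self = rng.normal(size=(d, d)) / np.sqrt(d)   # random initialization
w_msg = rng.normal(size=(d, d)) / np.sqrt(d)

def gnn_readout(n, p, layers=2):
    """Global-mean readout of a random GNN on one ER(n, p) graph."""
    adj = (rng.random((n, n)) < p).astype(float)
    adj = np.triu(adj, 1); adj = adj + adj.T            # undirected, no self-loops
    feats = rng.normal(size=(n, d))                     # i.i.d. node features
    for _ in range(layers):
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
        feats = np.tanh(feats @ w_self + ((adj @ feats) / deg) @ w_msg)
    return feats.mean(axis=0)                           # global mean readout

for n in [50, 200, 800, 3200]:
    outs = np.stack([gnn_readout(n, p=0.1) for _ in range(20)])
    print(n, outs.std(axis=0).mean())  # spread across graph samples should shrink
```

If the convergence theorems apply, the printed standard deviation should decrease with n, since the readout concentrates around a single constant vector regardless of the sampled graph.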
Discussion
Extension of convergence phenomena to wider classes of distributions and architectures.
Social Impact
Potential societal consequences of advancing machine learning.
Stats
Graph neural network outputs converge to constant functions on random graphs.
Probabilistic classifiers converge to constant outputs as graph size increases.
Empirical validation of convergence across different model initializations.