The paper presents several key results:
For GNNs with eventually constant activation functions (e.g. truncated ReLU), the authors show that the set of possible activation values at each layer is finite and computable, and they use this finiteness to obtain their decidability results.
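The finiteness phenomenon can be illustrated with a toy sketch (the weights and the one-dimensional, single-feature setup below are invented for the example, not taken from the paper): because a truncated ReLU is constant once its input is large enough, summing over ever more neighbors eventually saturates, so only finitely many activation values ever appear at the layer.

```python
def truncated_relu(x):
    """Eventually constant activation: identity on [0, 1], constant outside."""
    return min(max(x, 0.0), 1.0)

# hypothetical layer coefficients for the sketch
w, b = 0.4, -0.1

# activation of a node whose neighbors all carry feature 1.0,
# as its degree grows without bound
values = set()
for degree in range(1, 50):
    values.add(round(truncated_relu(w * degree + b), 10))

print(sorted(values))  # the set of reachable values is finite
```

Here the sum saturates once the degree reaches 3, so the whole infinite family of graphs produces only three distinct activation values.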
For GNNs with unbounded activation functions (e.g. standard ReLU), the authors show that the satisfiability problem is decidable when aggregation is only local (no global readout). These results hold both for directed graphs and for the standard case of undirected graphs used in the GNN literature.
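A minimal sketch of such a purely local layer (weight values and the tiny path graph are invented for illustration): each node combines its own feature with a sum over its neighbors' features, and there is no global readout term aggregating over the whole graph.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def local_gnn_layer(features, adj, w_self, w_agg, b):
    """One message-passing layer with only local (neighbor-sum)
    aggregation and no global readout term."""
    agg = adj @ features  # sum of each node's neighbor features
    return relu(features @ w_self + agg @ w_agg + b)

# toy graph: a path 0 - 1 - 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.array([[1.0], [0.0], [1.0]])  # one feature per node
w_self = np.array([[1.0]])
w_agg = np.array([[1.0]])
b = np.zeros(1)

print(local_gnn_layer(x, adj, w_self, w_agg, b))
```

The middle node of the path ends up with value 2 (its own 0 plus both endpoint features), while the endpoints keep value 1.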
The key technical tools are logical characterizations of GNN expressiveness and an analysis of the spectra (sets of possible activation values) of GNNs.
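The spectrum idea can be mimicked in a small sketch (a toy one-dimensional model with invented coefficients, not the paper's construction): starting from boolean input features, compute the set of reachable activation values layer by layer. Neighbor multisets are cut off at a fixed degree, since the eventually constant activation saturates for large sums anyway.

```python
from itertools import combinations_with_replacement

def sigma(x):
    """Truncated ReLU: identity on [0, 1], constant outside."""
    return min(max(x, 0.0), 1.0)

def next_spectrum(spec, a, c, b, max_degree=6):
    """Possible activation values at the next layer: each node mixes its
    own value x with the sum of a neighbor multiset drawn from spec.
    The degree cutoff is a toy bound; eventual constancy of sigma means
    large degrees add no new values."""
    out = set()
    for x in spec:
        for d in range(max_degree + 1):
            for ms in combinations_with_replacement(sorted(spec), d):
                out.add(round(sigma(a * x + c * sum(ms) + b), 10))
    return out

spec = {0.0, 1.0}  # input features assumed boolean
for _ in range(2):  # two layers
    spec = next_spectrum(spec, a=1.0, c=0.5, b=-0.25)

print(sorted(spec))  # a finite, explicitly computed spectrum
```

Each layer's spectrum stays finite and is computed directly from the previous one, which is the shape of the argument the summary alludes to.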
Key insights distilled from source content by Michael Bene... at arxiv.org, 04-30-2024: https://arxiv.org/pdf/2404.18151.pdf