Core Concepts
The paper establishes a correspondence between graph neural networks (GNNs) and arithmetic circuits over the real numbers, characterizing what GNNs can compute. By treating GNNs as machines that operate directly on real numbers rather than on Boolean values, it makes their underlying computational model precise.
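A minimal sketch of the idea, under the assumptions of sum aggregation and a ReLU activation (the names gnn_layer, W_self, and W_neigh are illustrative, not the paper's notation): one message-passing layer unrolls into a constant-depth arithmetic circuit over the reals, since every output entry is a fixed expression built from additions, multiplications, and one activation gate.

```python
import numpy as np

def gnn_layer(A, X, W_self, W_neigh, b):
    """One message-passing layer with sum aggregation.

    Unrolled, every output entry is a fixed-depth expression over +, *,
    and the activation -- i.e., an arithmetic circuit over the reals.
    """
    messages = A @ X @ W_neigh           # neighbor sum: additions and multiplications
    combined = X @ W_self + messages + b
    return np.maximum(combined, 0.0)     # ReLU gate applied entrywise

# Toy 3-node path graph with 2-dimensional node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
rng = np.random.default_rng(0)
W_self, W_neigh = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
b = np.zeros(2)
print(gnn_layer(A, X, W_self, W_neigh, b))
```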
Abstract
The paper explores the computational power of neural networks, focusing on graph neural networks (GNNs) and their relation to arithmetic circuits over the real numbers. It characterizes the expressive power of GNNs in computing real-valued functions on labeled graphs. The study relates traditional feed-forward neural networks to Boolean threshold circuits and discusses the logical expressiveness of GNNs. It extends previous work by considering GNNs that operate on real numbers rather than only on Boolean inputs and outputs. By characterizing the computational power of GNNs in terms of arithmetic circuits, the paper delineates both their capabilities and their limitations. It also examines scaling and complexity in neural networks, challenging the common narrative that larger networks can solve more complex tasks, and argues that functions not computable by existing models call for new architectures beyond constant-depth circuits over the reals.
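To make the circuit model concrete, here is a toy evaluator for arithmetic circuits over the reals, a sketch under the assumption that gates compute +, ×, or an activation such as ReLU; the Gate class and evaluate function are illustrative, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass
class Gate:
    op: str            # "input", "const", "add", "mul", or "relu"
    args: tuple = ()   # indices of predecessor gates
    value: float = 0.0 # payload for "input"/"const" gates

def evaluate(circuit):
    """Evaluate gates in topological order (predecessors listed first)."""
    vals = []
    for g in circuit:
        if g.op in ("input", "const"):
            vals.append(g.value)
        elif g.op == "add":
            vals.append(sum(vals[i] for i in g.args))
        elif g.op == "mul":
            v = 1.0
            for i in g.args:
                v *= vals[i]
            vals.append(v)
        elif g.op == "relu":
            vals.append(max(vals[g.args[0]], 0.0))
    return vals[-1]

# Depth-3 circuit computing relu(x*y + 2) at x = 1.5, y = -4.0.
circuit = [
    Gate("input", value=1.5),   # 0: x
    Gate("input", value=-4.0),  # 1: y
    Gate("const", value=2.0),   # 2
    Gate("mul", args=(0, 1)),   # 3: x*y
    Gate("add", args=(2, 3)),   # 4: x*y + 2
    Gate("relu", args=(4,)),    # 5
]
print(evaluate(circuit))  # relu(-6 + 2) = 0.0
```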
Stats
For a wide range of activation functions, the neural-network training problem is ∃R-complete.
Maass showed that, when networks are restricted to Boolean inputs, the languages decidable by polynomial-size FNNs are exactly those in (non-uniform) TC0 (see the threshold-gate sketch below).
Grohe connected GNNs to Boolean threshold circuits via GFO+C, a guarded fragment of first-order logic with counting.
Merrill studied transformers as uniform constant-depth threshold circuits.
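As a toy illustration of the threshold-circuit model behind the Maass, Grohe, and Merrill results above (not their constructions): a Boolean threshold gate fires when a weighted sum of its inputs reaches a threshold, and a single such gate already computes MAJORITY, the canonical TC0 function.

```python
def threshold_gate(bits, weights, t):
    """Boolean threshold gate: fires iff the weighted sum reaches t."""
    return int(sum(w * b for w, b in zip(weights, bits)) >= t)

def majority(bits):
    """MAJORITY of n bits: one threshold gate with unit weights suffices."""
    n = len(bits)
    return threshold_gate(bits, [1] * n, (n + 1) // 2)

print(majority([1, 0, 1, 1, 0]))  # 1: three of the five inputs are set
```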
Quotes
"Training fully connected neural networks is ∃R-complete." - Bertschinger et al., 2022
"The logical expressiveness of graph neural networks was studied by Barcel´o et al." - Barcel´o et al., 2020