Core Concepts
Geometric Graph Neural Networks leverage physical symmetries to model 3D atomic systems accurately.
Abstract
Geometric Graph Neural Networks (GNNs) are specialized architectures that capture the physical symmetries of 3D atomic systems. They learn latent representations of atoms through message passing while respecting Euclidean transformations (rotations, translations, and reflections). The pipeline consists of input preparation, an embedding block that initializes atom representations, and interaction blocks that learn geometric and relational features. The initial geometric graph can be constructed in several ways, for example with distance-cutoff graphs or with added long-range connections. Geometric GNNs fall into four families: invariant models, equivariant models in a Cartesian basis, equivariant models in a spherical basis, and unconstrained models. Finally, an output block makes task-specific predictions at the node or graph level.
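The Euclidean-symmetry constraint above can be made concrete with a small check. This is a plain-NumPy sketch, not tied to any published architecture: it verifies that interatomic distances (the typical inputs to invariant models) are unchanged by a random rotation and translation, while relative position vectors rotate along with the system (equivariance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "molecule": 5 atoms with random 3D coordinates.
pos = rng.normal(size=(5, 3))

def pairwise_distances(x):
    """All pairwise interatomic distances (rotation/translation-invariant)."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Random orthogonal matrix via QR decomposition, plus a random translation.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)

pos_transformed = pos @ Q.T + t

# Distances are invariant under the transformation...
assert np.allclose(pairwise_distances(pos), pairwise_distances(pos_transformed))

# ...while relative vectors are equivariant: they rotate with Q
# and are unaffected by the translation.
rel = pos[1] - pos[0]
rel_transformed = pos_transformed[1] - pos_transformed[0]
assert np.allclose(rel @ Q.T, rel_transformed)
```

Invariant architectures operate only on quantities like the distances here; equivariant architectures carry quantities like the relative vectors through the network and must transform them consistently at every layer.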
Key Points
Geometric GNNs underpin recent advances in the computational modeling of 3D atomic systems.
Geometric attributes such as positions and relative vectors transform predictably under physical symmetries (rotations, translations, reflections).
Four families of geometric GNN architectures: invariant, equivariant in a Cartesian basis, equivariant in a spherical basis, and unconstrained.
Various strategies construct the initial geometric graph, such as distance-cutoff graphs and added long-range connections.
Embedding block initializes learnable atom representations.
Interaction blocks update scalar and vector features through message passing.
Output block makes task-specific predictions at node or graph levels.
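The pipeline in the points above (graph construction, embedding block, interaction block, output block) can be sketched end-to-end. The block below is a minimal, hypothetical invariant model in plain NumPy with made-up weights: a distance-cutoff graph, a species-lookup embedding, one distance-weighted message-passing step, and a sum-pooled graph-level readout. Real models use learned filters and deep networks in place of each piece.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Input preparation: atomic numbers and 3D positions for a toy system. ---
z = np.array([1, 1, 8])                 # two hydrogens and one oxygen
pos = np.array([[0.96, 0.00, 0.0],
                [-0.24, 0.93, 0.0],
                [0.00, 0.00, 0.0]])

# --- Graph construction: connect atoms within a distance cutoff. ---
cutoff = 1.5
diff = pos[:, None, :] - pos[None, :, :]
dist = np.linalg.norm(diff, axis=-1)
adj = (dist < cutoff) & (dist > 0)       # boolean adjacency, no self-loops

# --- Embedding block: initialize per-atom features from the species. ---
emb_table = rng.normal(size=(100, 8))    # hypothetical table, one row per element
h = emb_table[z]                         # shape (num_atoms, 8)

# --- Interaction block: one invariant message-passing step. ---
# Messages are weighted by a smooth function of distance only, so the
# updated features are unchanged by rotations and translations.
w = np.where(adj, np.exp(-dist**2), 0.0)
h = h + w @ h                            # aggregate neighbor features

# --- Output block: graph-level prediction via sum pooling. ---
readout = rng.normal(size=8)             # hypothetical linear readout weights
energy = float(h.sum(axis=0) @ readout)
print(energy)
```

With these coordinates only the two O-H pairs fall inside the cutoff, so the adjacency has four directed edges; a node-level task would instead apply the readout to each row of `h` without pooling.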