Scalable Message Passing Neural Networks through Distributed Training and Sampling
A domain-decomposition-based distributed training and inference approach for message-passing neural networks (MPNNs) that scales to large graphs of up to 100,000 nodes by combining multi-GPU parallelization with node- and edge-sampling techniques.
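The idea can be illustrated with a minimal sketch, not the project's actual implementation: nodes are assigned to partitions (one per GPU), each node keeps only a sampled subset of its neighbors, and a message-passing step aggregates features over the sampled edges. All names here (`partition_nodes`, `sample_neighbors`, `mpnn_step`) are hypothetical, and sampling is made deterministic for clarity.

```python
# Illustrative sketch only -- hypothetical names, not the project's API.

def partition_nodes(num_nodes, num_parts):
    """Assign each node to a partition (one per GPU), round-robin."""
    return {v: v % num_parts for v in range(num_nodes)}

def sample_neighbors(adj, k):
    """Keep at most k neighbors per node (deterministic: lowest ids;
    a real implementation would sample randomly each epoch)."""
    return {v: sorted(nbrs)[:k] for v, nbrs in adj.items()}

def mpnn_step(features, adj):
    """One message-passing step: mean-aggregate neighbor features."""
    out = {}
    for v, nbrs in adj.items():
        out[v] = sum(features[u] for u in nbrs) / len(nbrs) if nbrs else features[v]
    return out

# Toy graph: 6 nodes split across 2 "devices"; node feature = node id.
adj = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2], 4: [5], 5: [4]}
parts = partition_nodes(6, 2)
sampled = sample_neighbors(adj, k=2)       # node 0 drops neighbor 3
features = {v: float(v) for v in range(6)}
updated = mpnn_step(features, sampled)     # updated[0] == (1 + 2) / 2
```

In a real distributed setting, each partition would hold only its local nodes plus a sampled halo of remote neighbors, exchanging boundary features between GPUs each step; the sampling bounds the per-step memory and communication cost.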