Core Concepts
Integrating Spiking Neural Networks (SNNs) with Graph Transformers enables all-pair node interactions on large-scale graphs with linear-complexity computation.
Abstract
The article introduces SpikeGraphormer, a novel approach that integrates SNNs with Graph Transformers to address the computational complexity of all-pair node interactions on large-scale graphs. The proposed Spiking Graph Attention (SGA) module replaces matrix multiplication with sparse addition and mask operations, reducing complexity from quadratic to linear in the number of nodes. The dual-branch architecture, SpikeGraphormer, combines a sparse GNN branch with the SGA-driven Graph Transformer branch, improving performance across various datasets.
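The abstract does not spell out how the SGA module is implemented. As a rough illustration of why binary spike activations make all-pair attention linear, the PyTorch sketch below (class names HeavisideSpike and SpikingLinearAttention are illustrative, not from the paper) binarizes the query, key, and value projections and reorders the attention computation to Q(KᵀV), so the N×N attention matrix is never materialized; with binary operands the remaining products reduce to additions and masking, which is the flavor of operation the SGA module exploits. Whether SpikeGraphormer's SGA follows exactly this form is an assumption here.

```python
import torch
import torch.nn as nn

class HeavisideSpike(torch.autograd.Function):
    """Binarize activations to {0, 1} spikes; surrogate gradient on backward."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Rectangular surrogate gradient in a window around the firing threshold.
        return grad_out * (x.abs() < 0.5).float()

class SpikingLinearAttention(nn.Module):
    """Toy spike-based attention: Q, K, V are binary spike maps, so
    Q @ (K^T @ V) costs O(N * d^2) -- linear in the node count N."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x):                          # x: [N, d] node features
        q = HeavisideSpike.apply(self.q_proj(x))   # binary spikes, [N, d]
        k = HeavisideSpike.apply(self.k_proj(x))
        v = HeavisideSpike.apply(self.v_proj(x))
        kv = k.t() @ v                             # [d, d] -- no N x N attention matrix
        return q @ kv                              # [N, d], all-pair mixing at linear cost

x = torch.randn(10_000, 64)                        # 10k nodes, yet memory stays O(N * d)
out = SpikingLinearAttention(64)(x)
print(out.shape)                                   # torch.Size([10000, 64])
```

Because the intermediate KᵀV term is only d×d, both compute and memory grow linearly with the number of nodes, which is the source of the reported GPU-memory savings over vanilla self-attention.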
Introduction to Graph Transformers and limitations of GNNs.
Proposal of integrating SNNs with Graph Transformers for efficient computation.
Description of the Spiking Graph Attention (SGA) module and its benefits.
Design of the dual-branch architecture, SpikeGraphormer, for enhanced performance (a minimal sketch follows this list).
Key contributions include novel insights, a reduction to linear complexity, and comprehensive experimental validation.
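As a hedged illustration of the dual-branch design mentioned above (the module names, the single-layer GNN branch, and the concat-then-project fusion are assumptions, not details from the paper), the sketch below runs a sparse GNN branch over the adjacency in parallel with a global spiking-attention branch and fuses the two views of each node. It reuses the SpikingLinearAttention class from the earlier sketch.

```python
import torch
import torch.nn as nn

class DualBranchLayer(nn.Module):
    """Hypothetical dual-branch block: local sparse message passing (GNN branch)
    plus global all-pair mixing (spiking-attention branch), fused per node."""
    def __init__(self, dim, global_attn):
        super().__init__()
        self.gnn_lin = nn.Linear(dim, dim)     # GNN branch weights
        self.global_attn = global_attn         # e.g. SpikingLinearAttention from above
        self.fuse = nn.Linear(2 * dim, dim)    # simple concat-then-project fusion

    def forward(self, x, adj):                 # x: [N, d], adj: sparse [N, N]
        local = torch.sparse.mm(adj, self.gnn_lin(x))   # GNN branch: neighborhood aggregation
        global_ = self.global_attn(x)                   # Transformer branch: linear-cost attention
        return self.fuse(torch.cat([local, global_], dim=-1))

# Usage: a tiny 5-node ring graph with a sparse adjacency matrix.
idx = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])
adj = torch.sparse_coo_tensor(idx, torch.ones(5), (5, 5))
layer = DualBranchLayer(64, SpikingLinearAttention(64))
print(layer(torch.randn(5, 64), adj).shape)    # torch.Size([5, 64])
```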
Stats
"Our work is the first attempt to introduce SNNs into Graph Transformers."
"GPU memory cost is 10 ∼20 × lower than vanilla self-attention."
Quotes
"Our work is the first attempt to introduce SNNs into Graph Transformers."
"GPU memory cost is 10 ∼20 × lower than vanilla self-attention."