Core Concept
The proposed Continuous Spiking Graph Neural Networks (COS-GNN) framework integrates spiking neural networks (SNNs) with continuous graph neural networks (CGNNs) to achieve energy-efficient and effective graph learning, while addressing the information loss of SNNs through a high-order spike representation.
Summary
The paper develops Continuous Spiking Graph Neural Networks (COS-GNN), a framework that combines the energy efficiency of spiking neural networks (SNNs) with the continuous dynamics of continuous graph neural networks (CGNNs) for efficient and effective graph learning.
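The coupling can be pictured as spike-encoded node features driving a graph ODE. Below is a minimal NumPy sketch of that idea; the LIF constants (`tau`, `v_th`) and the diffusion-style ODE are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal conceptual sketch: a first-order LIF spiking encoder feeding a
# continuous (ODE-based) graph layer. Constants and the diffusion ODE are
# illustrative assumptions, not COS-GNN's exact equations.
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One first-order leaky integrate-and-fire (LIF) update.
    v: membrane potentials, x: input currents. Returns (new_v, spikes)."""
    v = v + (x - v) / tau                 # leaky integration
    spikes = (v >= v_th).astype(x.dtype)  # threshold crossing emits a spike
    v = v * (1.0 - spikes)                # hard reset after a spike
    return v, spikes

def graph_ode_step(h, A_hat, dt=0.1):
    """One explicit-Euler step of a diffusion-style graph ODE:
    dh/dt = (A_hat - I) h, with A_hat a normalized adjacency matrix."""
    return h + dt * (A_hat @ h - h)

rng = np.random.default_rng(0)
N, D, T = 5, 8, 4                         # nodes, feature dim, time steps
A = (rng.random((N, N)) < 0.4).astype(float)
A = ((A + A.T) > 0).astype(float)         # symmetrize
np.fill_diagonal(A, 1.0)                  # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

x = rng.random((N, D))
v = np.zeros((N, D))
h = np.zeros((N, D))
for _ in range(T):                        # spikes drive the continuous dynamics
    v, s = lif_step(v, x)
    h = graph_ode_step(h + s, A_hat)
print(h.shape)  # (5, 8)
```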
Key highlights:
- Existing CGNNs are computationally expensive, making them challenging to deploy on battery-powered devices. The authors aim to address this by incorporating SNNs, which are energy-efficient.
- The authors propose two variants of COS-GNN: COS-GNN-1st, which uses first-order SNNs and second-order CGNNs, and COS-GNN-2nd, which employs second-order SNNs and second-order CGNNs.
- The second-order COS-GNN-2nd mitigates the information loss of first-order SNNs by deriving a second-order spike representation and the corresponding backpropagation (see the sketch after this list).
- Theoretical analysis shows that COS-GNN effectively mitigates the exploding and vanishing gradient problems, ensuring the stability of the model.
- Extensive experiments on various graph learning tasks demonstrate the effectiveness of COS-GNN compared to state-of-the-art methods, with COS-GNN-2nd outperforming other variants.
- The authors also provide an energy efficiency analysis, showing that COS-GNN significantly reduces the number of operations required for node prediction compared to baseline methods.
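To make the first- vs. second-order distinction concrete, the sketch below contrasts a single-state LIF neuron with a two-state variant in which a synaptic-current state feeds the membrane. The parameterization is an assumption for illustration, not the paper's derivation of the second-order spike representation.

```python
# Illustrative contrast between first- and second-order spiking dynamics,
# assuming a current-based LIF model; the exact second-order form used by
# COS-GNN-2nd may differ from this sketch.
import numpy as np

def lif_1st(x, T=20, tau_v=2.0, v_th=1.0):
    """First-order LIF: a single membrane-potential state."""
    v, spikes = 0.0, []
    for t in range(T):
        v += (x[t] - v) / tau_v
        s = float(v >= v_th)
        v *= (1.0 - s)                # reset on spike
        spikes.append(s)
    return spikes

def lif_2nd(x, T=20, tau_v=2.0, tau_c=4.0, v_th=1.0):
    """Second-order LIF: a synaptic-current state feeds the membrane,
    smoothing inputs and retaining more temporal information."""
    v, c, spikes = 0.0, 0.0, []
    for t in range(T):
        c += (x[t] - c) / tau_c       # synaptic current dynamics
        v += (c - v) / tau_v          # membrane driven by the current
        s = float(v >= v_th)
        v *= (1.0 - s)
        spikes.append(s)
    return spikes

rng = np.random.default_rng(1)
x = rng.random(20) * 1.5
print(sum(lif_1st(x)), sum(lif_2nd(x)))  # spike counts under each model
```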
Statistics
Operations per node prediction:

| Model       | Cora    | Citeseer | Pubmed  |
|-------------|---------|----------|---------|
| COS-GNN-1st | 2.33K   | 1.94K    | 1.02K   |
| COS-GNN-2nd | 3.97K   | 3.42K    | 2.78K   |
| GCN         | 67.77K  | 79.54K   | 414.16K |
| GAT         | 308.94K | 349.91K  | 1.53M   |
| SGC         | 10.03K  | 22.22K   | 1.50K   |
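For intuition, these counts can be turned into rough energy figures using the commonly cited 45nm per-operation costs (roughly 4.6 pJ per multiply-accumulate, 0.9 pJ per accumulate). Treating the baselines' operations as MACs and COS-GNN's spike-driven operations as accumulates is an assumption of this sketch; the paper's own energy model may differ.

```python
# Back-of-the-envelope energy comparison from the operation counts above.
# Assumptions: GNN baselines use multiply-accumulate (MAC) ops, the
# spike-driven COS-GNN variants use accumulate (AC) ops, costed with the
# commonly cited 45nm figures (~4.6 pJ/MAC, ~0.9 pJ/AC).
PJ_PER_MAC, PJ_PER_AC = 4.6, 0.9

ops = {  # operations per node prediction, from the statistics above
    "COS-GNN-1st": {"Cora": 2.33e3, "Citeseer": 1.94e3, "Pubmed": 1.02e3},
    "COS-GNN-2nd": {"Cora": 3.97e3, "Citeseer": 3.42e3, "Pubmed": 2.78e3},
    "GCN":         {"Cora": 67.77e3, "Citeseer": 79.54e3, "Pubmed": 414.16e3},
    "GAT":         {"Cora": 308.94e3, "Citeseer": 349.91e3, "Pubmed": 1.53e6},
    "SGC":         {"Cora": 10.03e3, "Citeseer": 22.22e3, "Pubmed": 1.50e3},
}

for model, per_ds in ops.items():
    spiking = model.startswith("COS-GNN")
    cost = PJ_PER_AC if spiking else PJ_PER_MAC
    energy = {ds: n * cost / 1e3 for ds, n in per_ds.items()}  # in nJ
    print(f"{model:12s}", {ds: f"{e:.2f} nJ" for ds, e in energy.items()})
```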