Core Concepts
Explicitly modeling the residual error between artificial neural networks (ANNs) and converted spiking neural networks (SNNs) as additive noise can effectively reduce the performance gap under ultra-low-latency conditions.
Abstract
The paper proposes a new approach to improve the performance of low-latency ANN-SNN conversion by explicitly modeling the residual error between the source ANN and the converted SNN. The key insights are:
The authors analyze the sources of conversion error, including clipping error, quantization error, and residual error. They find that the residual error, which arises because IF neurons cannot emit spikes for membrane potential left between the resting potential and the firing threshold at the end of inference (illustrated in the first sketch after this list), is a major factor limiting the performance of low-latency converted SNNs.
To address this issue, the authors introduce a "Noisy Quantized" (NQ) activation function that adds zero-mean Gaussian noise to the quantized ANN activation during training. The noise is designed to compensate for the residual error, effectively narrowing the gap between the ANN and the converted SNN (see the second sketch after this list).
The authors propose a layer-wise error-compensation strategy that automatically adjusts the noise intensity of each activation layer using the validation set, tailoring the noise to each layer's error characteristics.
Experiments on the CIFAR-10 and CIFAR-100 datasets show that the proposed method outperforms state-of-the-art ANN-SNN conversion methods, especially under ultra-low-latency conditions (e.g., 2-4 time steps). For example, the authors achieve 93.72% top-1 accuracy on CIFAR-10 with just 2 time steps, significantly better than previous approaches.
The training overhead introduced by the noise injection is minimal, making the proposed method efficient and practical for deployment on neuromorphic hardware.
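To make the residual error concrete, here is a minimal sketch (PyTorch, with illustrative names and parameters) of an IF neuron with soft reset simulated for T time steps. Whatever membrane potential is left between the resting potential and the threshold after the last step can never be expressed as spikes; that leftover charge is the residual error the noise model targets.

```python
import torch

def if_neuron_rate(x, threshold=1.0, T=4):
    """Simulate an IF neuron driven by a constant input current x for T steps.

    Returns the SNN output (average postsynaptic potential) and the residual
    membrane potential that remains unexpressed after the last step.
    """
    v = torch.zeros_like(x)            # membrane potential, starts at rest (0)
    spikes = torch.zeros_like(x)
    for _ in range(T):
        v = v + x                      # integrate the input current
        fired = (v >= threshold).float()
        spikes = spikes + fired
        v = v - fired * threshold      # soft reset: subtract threshold on spike
    rate = spikes * threshold / T      # quantized approximation of x
    return rate, v                     # v is the residual membrane potential

x = torch.tensor([0.3, 0.6, 0.9])
rate, residual = if_neuron_rate(x, T=4)
print(rate)      # tensor([0.2500, 0.5000, 0.7500]) -- quantized to T levels
print(residual)  # tensor([0.2000, 0.4000, 0.6000]) -- unexpressed potential
```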
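And here is a minimal sketch of what the NQ activation could look like on the ANN side, under stated assumptions: a QCFS-style quantizer with T levels and a trainable clipping threshold, and a per-layer `sigma` standing in for the noise intensity the paper tunes on the validation set. Names and the exact quantizer form are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NoisyQuantizedActivation(nn.Module):
    """Quantized ReLU with additive zero-mean Gaussian noise during training."""

    def __init__(self, T=4, sigma=0.1):
        super().__init__()
        self.T = T                                    # quantization levels = SNN time steps
        self.theta = nn.Parameter(torch.tensor(1.0))  # trainable clipping threshold
        self.sigma = sigma                            # layer-wise noise intensity (tuned on a validation set)

    def forward(self, x):
        # Clip to [0, theta], then quantize to T levels (clipping + quantization errors).
        y = torch.clamp(x / self.theta, 0.0, 1.0)
        q = torch.floor(y * self.T + 0.5) / self.T
        # Straight-through estimator: quantized forward pass, identity gradient.
        y = (y + (q - y).detach()) * self.theta
        if self.training:
            # Zero-mean Gaussian noise modeling the ANN-to-SNN residual error.
            y = y + self.sigma * torch.randn_like(y)
        return y
```

At conversion time the noise is simply dropped (`self.training` is False), so the deployed SNN pays no extra cost; the noise only shapes the ANN's weights during training to be robust to the residual error.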
Stats
Beyond the headline results quoted above (e.g., 93.72% top-1 accuracy on CIFAR-10 at 2 time steps), this summary extracts no further numerical data points; the paper itself presents figures and tables that quantify the improvements over state-of-the-art ANN-SNN conversion techniques.
Quotes
"The challenge of low-latency ANN-SNN conversion arises from conversion errors, which have been identified by previous studies [29; 1], resulting in a performance gap under low-latency conditions."
"We find that the conversion loss for low-latency SNN primarily stems from residual errors between quantized ANNs and converted SNNs."
"Explicitly modeling the residual error as a Gaussian noise with a zero mean and integrating the noise into the quantized activation of the source ANN during training, aiming to compensate for the gap between the source ANN and the converted SNN."