The paper presents a new type of artificial spiking neuron called the Spiking Recurrent Cell (SRC), which is derived from the well-known Gated Recurrent Unit (GRU) cell. The key advantage of SRC is that its equations are fully differentiable, allowing the use of standard backpropagation for training.
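To make the GRU lineage concrete, here is a minimal scalar GRU step in pure Python. This is the standard GRU update (not the SRC's modified equations); the weight names in `params` are illustrative. Every operation is a smooth function, which is what makes backpropagation straightforward for cells derived from it.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One step of a scalar GRU cell; p holds illustrative weights.
    z: update gate, r: reset gate, h_cand: candidate state."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    # Convex combination of old state and candidate: smooth and
    # differentiable everywhere, unlike a hard spike threshold.
    return (1.0 - z) * h + z * h_cand

params = {k: 0.5 for k in ["wz", "uz", "bz", "wr", "ur", "br", "wh", "uh", "bh"]}
h = 0.0
for x in [1.0, 0.0, -1.0]:
    h = gru_step(x, h, params)
```

The SRC modifies this template so that the hidden state produces spike-shaped trajectories while keeping every operation differentiable.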
The authors first describe the spike generation mechanism of the SRC, which relies on a mix of positive and negative feedback to create naturally spike-shaped events, in contrast to the discontinuous binary spikes of traditional spiking neurons. The SRC also integrates its inputs through leaky integrators.
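The feedback mechanism can be illustrated with a toy simulation (these are not the paper's exact equations; the coefficients are made up for demonstration). A leaky integrator accumulates input, fast positive feedback pushes the potential up into a spike-like excursion, and a slower negative-feedback variable pulls it back down:

```python
import math

def simulate_src_like(inputs, dt=1.0):
    """Toy continuous spiking dynamics (illustrative, not the SRC's
    published equations). Returns the potential trace over time."""
    v = 0.0      # membrane-like potential (leaky integrator)
    w = 0.0      # slow adaptation variable (negative feedback)
    trace = []
    for x in inputs:
        # Leak (-0.3*v), fast positive feedback (2*tanh(v), smooth and
        # differentiable), input drive, and opposing slow feedback (-w).
        v = v + dt * (-0.3 * v + 2.0 * math.tanh(v) + 0.8 * x - w)
        # w lags behind v and terminates the spike-like excursion.
        w = w + dt * 0.2 * (v - w)
        trace.append(v)
    return trace

trace = simulate_src_like([1.0] * 20)
```

Under constant input the potential rises sharply and then relaxes, a spike-shaped event produced entirely by smooth functions, so gradients flow through every time step.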
The SRC-based spiking neural networks are then evaluated on the MNIST, Fashion-MNIST, and Neuromorphic-MNIST (N-MNIST) datasets. The results show that SRC networks achieve performance comparable to other spiking neural networks while being easier to train, especially for deeper architectures.
The authors demonstrate that SRC networks with up to 15 layers can be trained successfully, while traditional spiking networks built from leaky integrate-and-fire (LIF) neurons struggle to learn as depth increases. This highlights the advantage of the SRC's differentiable dynamics, which allow efficient backpropagation-based training of deep spiking neural networks.
The paper concludes by discussing potential future research directions, such as exploring more biologically-inspired behaviors in the SRC neuron and applying it to more complex tasks.