The paper presents a novel spiking neural network (SNN) model enhanced with active dendrites to efficiently mitigate catastrophic forgetting in temporally-encoded SNNs. The key highlights are:
The proposed neuron model exploits the properties of time-to-first-spike (TTFS) encoding and its high sparsity to introduce a dendritic-dependent spike time delay mechanism. This allows for context-dependent modulation of neuron activity, similar to the behavior of active dendrites in biological neurons.
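This mechanism can be illustrated with a minimal sketch (the function names and the sigmoid-based delay rule below are illustrative assumptions, not the paper's exact formulation): under TTFS encoding a stronger input spikes earlier, and a dendritic segment that poorly matches the current context adds extra delay to the neuron's spike time.

```python
import numpy as np

def ttfs_encode(x, t_max=1.0):
    """Time-to-first-spike encoding: a stronger input x in [0, 1] spikes earlier."""
    return t_max * (1.0 - x)

def modulated_spike_time(x, context, segments, t_max=1.0):
    """Delay the TTFS spike by a context-dependent amount: the best-matching
    dendritic segment sets the extra delay (hypothetical sigmoid rule --
    a poor context match yields a longer delay)."""
    match = np.max(segments @ context)  # best segment/context similarity
    delay = t_max * (1.0 - 1.0 / (1.0 + np.exp(-match)))
    return ttfs_encode(x, t_max) + delay
```

With two segments tuned to orthogonal contexts, the same input spikes earlier under a matching context than under a mismatched one.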
The authors leverage the "dead neurons" problem in TTFS-encoded networks to intrinsically implement a gating mechanism, avoiding the need for a dedicated layer as in previous works. This dynamic gating allows for the emergence of different sub-networks for different tasks, reducing interference and mitigating catastrophic forgetting.
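The gating idea can be sketched as a toy example (the segment weights, deadline, and delay rule below are illustrative assumptions): a neuron whose context-modulated spike time falls past the coding deadline never fires at all, so each context implicitly selects its own sub-network of live neurons.

```python
import numpy as np

T_MAX = 1.0  # coding deadline: spikes scheduled after T_MAX never occur

# Illustrative fixed values: four neurons, each with one 2-D dendritic segment.
segments = np.array([[ 2.0, -2.0],
                     [-2.0,  2.0],
                     [ 2.0, -2.0],
                     [-2.0,  2.0]])
base_times = np.full(4, 0.4)  # TTFS spike times before dendritic modulation

def active_mask(context):
    """Neurons whose context-dependent delay pushes them past T_MAX stay
    silent ("dead"), implicitly gating them out without a dedicated layer."""
    match = segments @ context
    delay = T_MAX * (1.0 - 1.0 / (1.0 + np.exp(-match)))
    return (base_times + delay) < T_MAX

# Two orthogonal contexts activate disjoint sub-networks.
mask_a = active_mask(np.array([1.0, 0.0]))  # neurons 0 and 2 fire
mask_b = active_mask(np.array([0.0, 1.0]))  # neurons 1 and 3 fire
```

Here the two contexts produce non-overlapping active masks, which is exactly the interference reduction the summary describes.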
The model is evaluated on the Split MNIST dataset, demonstrating a test accuracy of 88.3% across sequentially learned tasks, a significant improvement over the same network without active dendrites.
The authors also propose a novel digital hardware architecture for TTFS-encoded SNNs with active dendrites, which performs inference in 37.3 ms on average while exactly reproducing the results of the quantized software model.
The work showcases an effective approach to enable efficient continual learning in energy-efficient TTFS-encoded spiking neural networks, paving the way for their deployment in real-world edge computing scenarios.
Source: arxiv.org