Hierarchical concepts can be represented in layered spiking neural networks using multiple representative neurons per concept, providing fault-tolerance against neuron failures during the recognition process.
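A minimal sketch of this redundancy idea: when a concept is represented by several neurons, recognition can tolerate some of them failing. The majority-vote rule and the values below are illustrative assumptions, not the paper's actual mechanism.

```python
def concept_recognized(spikes, threshold=0.5):
    """A concept counts as recognized if at least a threshold fraction
    of its representative neurons fired, so individual neuron failures
    do not break recognition."""
    return sum(spikes) / len(spikes) >= threshold

# Five redundant neurons represent one concept; two fail silently (0).
representatives = [1, 1, 0, 1, 0]
print(concept_recognized(representatives))  # still recognized: 3/5 fired
```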
The authors propose a novel attention module called Projected-Full Attention (PFA) that leverages tensor decomposition to generate attention maps with flexible rank, enabling better adaptation to specific tasks. PFA outperforms existing attention-based SNN models on both static and dynamic benchmark datasets.
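The core idea of generating an attention map from a low-rank decomposition rather than a full matrix can be sketched as follows. The factor values, the rank, and the sigmoid squashing are illustrative assumptions for the demo, not PFA's actual formulation.

```python
import math

def low_rank_attention_map(rows, cols):
    """Build an h x w attention map as a sum of rank-1 outer products:
    attn[i][j] = sum_r rows[r][i] * cols[r][j], squashed to (0, 1)
    with a sigmoid. The rank is just the number of factor pairs."""
    h, w = len(rows[0]), len(cols[0])
    attn = [[sum(r[i] * c[j] for r, c in zip(rows, cols))
             for j in range(w)] for i in range(h)]
    return [[1 / (1 + math.exp(-a)) for a in row] for row in attn]

# Rank-2 factors for a 3x4 attention map (illustrative values).
rows = [[0.5, -1.0, 0.2], [1.0, 0.0, -0.5]]
cols = [[1.0, 0.5, 0.0, -1.0], [0.2, 0.3, 1.0, 0.0]]
attn = low_rank_attention_map(rows, cols)
```

Varying the number of factor pairs changes the rank of the map, which is the flexibility the summary refers to.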
SpikeNAS, a novel fast memory-aware neural architecture search (NAS) framework for Spiking Neural Networks (SNNs), quickly finds an appropriate SNN architecture with high accuracy under the memory budgets of autonomous mobile agents.
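One way to picture a memory-aware search constraint: discard candidate architectures whose estimated weight memory exceeds the budget before evaluating them. The candidate layer widths, the 4-bytes-per-weight memory model, and the budget below are assumptions for illustration, not SpikeNAS internals.

```python
def memory_footprint_bytes(layer_widths, bytes_per_weight=4):
    """Estimate weight memory of a fully connected SNN as the sum of
    per-layer weight-matrix sizes (simplified: weights only)."""
    return sum(a * b * bytes_per_weight
               for a, b in zip(layer_widths, layer_widths[1:]))

def fits_budget(layer_widths, budget_bytes):
    return memory_footprint_bytes(layer_widths) <= budget_bytes

# Hypothetical candidate architectures and a tight on-device budget.
candidates = [[784, 256, 10], [784, 512, 128, 10], [784, 128, 10]]
budget = 600_000  # bytes
feasible = [c for c in candidates if fits_budget(c, budget)]
```

Pruning infeasible candidates up front is what makes the search both fast and deployable under a hard memory limit.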
A novel methodology to systematically analyze the impact of key SNN parameters, including batch size, learning rate, threshold potential, and weight decay, and leverage this analysis to enhance SNN models for efficient autonomous driving systems.
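The kind of analysis this implies can be sketched as a grid sweep over the four parameters, keeping the configuration that scores best. The `evaluate` function below is a dummy stand-in for training and evaluating an SNN, and all values are illustrative assumptions.

```python
import itertools

def evaluate(batch, lr, v_th, wd):
    """Dummy stand-in for training an SNN with these settings and
    returning validation accuracy; not a real model."""
    return (0.9 - abs(lr - 1e-3) * 50 - abs(v_th - 1.0) * 0.05
            - wd * 100 - (batch - 32) * 1e-4)

# Sweep batch size, learning rate, threshold potential, weight decay.
grid = itertools.product([32, 64], [1e-3, 1e-2], [0.5, 1.0], [0.0, 1e-4])
best = max(grid, key=lambda p: evaluate(*p))
```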
A novel methodology that improves the accuracy of spiking neural networks (SNNs) through kernel size scaling, while considering the memory footprint for efficient deployment in embedded applications.
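A rough sketch of the memory side of this trade-off: a convolutional layer's weight footprint grows quadratically with kernel size, so scaling kernels up must be checked against the memory budget. The layer shape and 4-byte weights are illustrative assumptions, not the paper's setup.

```python
def conv_weight_bytes(in_ch, out_ch, kernel, bytes_per_weight=4):
    """Weight memory of a 2D convolution: out * in * k * k parameters."""
    return out_ch * in_ch * kernel * kernel * bytes_per_weight

# Scaling the kernel of a hypothetical 64->64 layer from 3 to 7
# multiplies weight memory by (7/3)^2 ≈ 5.4x.
for k in (3, 5, 7):
    print(k, conv_weight_bytes(64, 64, k))
```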