Core Concepts
Hierarchical concepts can be represented in layered spiking neural networks using multiple representative neurons per concept, providing fault-tolerance against neuron failures during the recognition process.
Abstract
The paper describes how hierarchical concepts can be represented in three types of layered spiking neural networks: feed-forward networks with high connectivity, feed-forward networks with low connectivity, and layered networks with low connectivity and lateral edges. The key insights are:
Using multiple representative neurons (reps) per concept, rather than a single rep, provides fault-tolerance against neuron failures during the recognition process.
For feed-forward networks with high connectivity, the paper shows that recognition can work correctly with high probability, even if some randomly-chosen neurons fail. The probability of correct recognition increases with the number of reps per concept and decreases with the probability of neuron failure.
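The high-probability recognition claim can be illustrated with a small Monte Carlo sketch. The independent-failure model and the majority firing threshold below are simplifying assumptions for illustration, not the paper's exact definitions:

```python
import random

def recognition_succeeds(reps, fail_prob, threshold_frac=0.5, rng=random):
    """One trial: each rep of a fully presented concept fails
    independently with probability fail_prob; recognition succeeds
    if at least threshold_frac of the reps survive and fire.
    (Illustrative model; the threshold rule is an assumption.)"""
    surviving = sum(1 for _ in range(reps) if rng.random() >= fail_prob)
    return surviving >= threshold_frac * reps

def estimate_success(reps, fail_prob, trials=10_000, seed=0):
    """Estimate the probability of correct recognition by simulation."""
    rng = random.Random(seed)
    hits = sum(recognition_succeeds(reps, fail_prob, rng=rng)
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    # Success probability rises with the number of reps per concept
    # and falls with the neuron-failure probability.
    for r in (1, 5, 25):
        print(r, estimate_success(r, fail_prob=0.2))
```

Under this toy model, a single rep succeeds only when that one neuron survives, while 25 reps tolerate many simultaneous failures, matching the qualitative trend described above.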
For feed-forward networks with lower connectivity, the paper extends the results by requiring that at least a certain fraction of the reps of each child concept be connected to each rep of a parent concept.
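The fractional-connectivity requirement can be phrased as a simple check on the edge set. The representation of edges as pairs and the helper name are assumptions made for this sketch:

```python
def meets_fraction_condition(edges, child_reps, parent_reps, frac):
    """Check the (assumed) low-connectivity condition: every parent
    rep must receive edges from at least `frac` of the reps of the
    child concept.  `edges` is a set of (child_rep, parent_rep) pairs."""
    need = frac * len(child_reps)
    for p in parent_reps:
        incoming = sum(1 for c in child_reps if (c, p) in edges)
        if incoming < need:
            return False
    return True
```

A fully connected bipartite layer trivially satisfies the condition for any fraction; as edges are removed, the condition constrains how unevenly the remaining edges may be distributed across parent reps.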
For networks with low connectivity and lateral edges, the paper further extends the results by allowing fewer edges from child to parent reps, compensating with lateral edges within layers.
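One way to picture the role of lateral edges is a two-phase firing process: feed-forward input ignites some parent reps, and lateral edges within the layer then recruit reps whose feed-forward input fell short. The two-phase schedule and both thresholds here are illustrative assumptions, not the paper's precise dynamics:

```python
def two_phase_recognition(ff_input, lateral, parent_reps,
                          ff_thresh, lat_thresh):
    """Illustrative two-phase firing with lateral edges (assumed model).
    Phase 1: a parent rep fires if its feed-forward input meets
    ff_thresh.  Phase 2: a non-firing rep joins if at least lat_thresh
    already-firing reps point to it via lateral edges (pairs (q, p))."""
    fired = {p for p in parent_reps if ff_input.get(p, 0) >= ff_thresh}
    recruited = {p for p in parent_reps - fired
                 if sum(1 for q in fired if (q, p) in lateral) >= lat_thresh}
    return fired | recruited
```

This shows the compensation informally: a rep with too few incoming feed-forward edges can still end up firing, provided enough of its layer-mates fire and are laterally connected to it.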
The paper also discusses how these multi-rep representations could be learned, using approaches inspired by work on the assembly calculus.
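In the spirit of assembly-calculus plasticity, learning can be sketched as a Hebbian rule that multiplicatively strengthens edges whose endpoints fire together. The update rule and the plasticity parameter `beta` are illustrative assumptions, not the paper's learning algorithm:

```python
def hebbian_update(weights, pre_fired, post_fired, beta=0.1):
    """One plasticity step: multiply the weight of each edge whose
    presynaptic and postsynaptic neurons both fired by (1 + beta).
    `weights` maps (pre, post) pairs to floats; the rule and beta
    are assumptions made for this sketch."""
    for (pre, post), w in weights.items():
        if pre in pre_fired and post in post_fired:
            weights[(pre, post)] = w * (1.0 + beta)
    return weights
```

Repeatedly presenting a concept would then strengthen the edges from its child reps to its eventual parent reps, letting a stable multi-rep representation emerge.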