The content presents a computational model of how recurrent neural networks can learn an internal model of sensory experiences and spontaneously replay this model in the absence of external stimuli. The key insights are:
The model proposes a synaptic plasticity mechanism that learns to predict the response of each neuron based on its afferent and recurrent inputs. This allows the network to self-organize cell assemblies that encode the statistical structure of salient sensory events.
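The paper's plasticity rule is not reproduced here, but the general idea of error-driven predictive learning in a recurrent network can be sketched as follows. This is a minimal toy illustration, assuming a rate-based network in which each neuron's recurrent weights are adjusted to predict its afferent drive; all sizes, constants, and the `tanh` nonlinearity are illustrative choices, not the authors' specification.

```python
import numpy as np

# Toy sketch of error-driven predictive plasticity (illustrative, not the
# paper's exact rule): each neuron's recurrent input is trained to predict
# its afferent drive, and weights are nudged to reduce the prediction error.

rng = np.random.default_rng(0)
n_in, n_rec = 8, 16                         # afferent / recurrent population sizes
W_in = rng.normal(0, 0.1, (n_rec, n_in))    # afferent weights (fixed here)
W_rec = rng.normal(0, 0.1, (n_rec, n_rec))  # recurrent weights (plastic)
np.fill_diagonal(W_rec, 0.0)                # no self-connections
eta = 0.01                                  # learning rate

def step(x, r):
    """One plasticity step: compare each neuron's afferent drive with the
    prediction from recurrent input, then reduce the mismatch."""
    global W_rec
    drive = W_in @ x                  # afferent drive (treated as the target)
    pred = W_rec @ r                  # recurrent prediction of that drive
    err = drive - pred                # per-neuron prediction error
    W_rec += eta * np.outer(err, r)   # error-driven weight update
    np.fill_diagonal(W_rec, 0.0)
    return np.tanh(drive + pred)      # new firing rates

r = np.zeros(n_rec)
for _ in range(200):
    x = rng.random(n_in)              # random sensory input pattern
    r = step(x, r)
```

Under repeated exposure to structured inputs, an update of this general form lets recurrent weights absorb the input statistics, which is the precondition for the assembly formation described above.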
The spontaneous activity of the trained network reproduces the probability structure of the previously experienced sensory stimuli, with the relative firing rates of the cell assemblies matching the relative probabilities of the corresponding stimuli.
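The probability-matching claim can be made concrete with a small numerical check. The sketch below uses hypothetical numbers (a 0.7 / 0.3 stimulus schedule), not the paper's data: if spontaneous replay events ignite each assembly in proportion to its stimulus's past probability, the empirical replay frequencies should recover that ratio.

```python
import numpy as np

# Illustrative probability-matching check (hypothetical 0.7 / 0.3 schedule):
# if replay activates each assembly with the probability of its stimulus,
# the empirical event frequencies should match the training probabilities.
rng = np.random.default_rng(1)
p_stim = np.array([0.7, 0.3])             # past stimulus probabilities
events = rng.choice(2, size=10_000, p=p_stim)  # which assembly ignites per event
replay_rates = np.bincount(events) / len(events)
ratio = replay_rates[0] / replay_rates[1]      # relative firing of the assemblies
```

With 10,000 events the empirical frequencies land close to 0.7 and 0.3, so the assembly activation ratio approximates the 7:3 stimulus ratio.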
The model demonstrates that this spontaneous replay of the learned internal model can account for behavioral biases observed in perceptual decision-making tasks, where prior experiences with unequal stimulus probabilities influence the subjects' choices.
The model also predicts the emergence of two distinct types of inhibitory connections: one mediating lateral inhibition between cell assemblies, and another desynchronizing neurons within each assembly. Both types are crucial for robust learning and replay of the internal model.
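The two inhibition types imply a block structure in the inhibitory weight matrix, which can be sketched as follows. This is a hypothetical connectivity diagram, assuming three assemblies of four neurons and illustrative weight values; the paper's learned weights would differ in detail.

```python
import numpy as np

# Hypothetical sketch of the two inhibition types: strong lateral inhibition
# between assemblies (off-diagonal blocks) and weaker desynchronizing
# inhibition within each assembly (on-diagonal blocks). Values are illustrative.
n_assemblies, size = 3, 4
N = n_assemblies * size
assembly = np.repeat(np.arange(n_assemblies), size)  # assembly label per neuron

W_inh = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i == j:
            continue                       # no self-connections
        if assembly[i] != assembly[j]:
            W_inh[i, j] = -1.0             # lateral inhibition across assemblies
        else:
            W_inh[i, j] = -0.2             # within-assembly desynchronization
```

The off-block entries suppress competing assemblies so only one replays at a time, while the weaker on-block entries keep neurons of the active assembly from firing in lockstep.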
The proposed learning mechanism is shown to work both in a simplified network model and in a more biologically realistic model with distinct excitatory and inhibitory neuron populations that obeys Dale's law.
Key insights from the preprint by Asabuki, T., ... at www.biorxiv.org, 02-18-2023.
https://www.biorxiv.org/content/10.1101/2023.02.17.528958v1