Key concepts
Stochastic gradient descent may indeed play a role in optimizing biological neural networks, even though the learning process relies only on local information.
Summary
The paper analyzes a stochastic model for supervised learning in biological neural networks (BNNs). It starts by reviewing the Schmidt-Hieber model, which shows that the local updating rule in BNNs corresponds to a zero-order optimization procedure on average.
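To make the "zero-order on average" statement concrete, here is a minimal Python sketch. It is not the paper's spike-based update rule: the quadratic loss and the Gaussian probe directions are illustrative assumptions. Each update uses only two loss evaluations along one random direction, yet the average over many such updates lines up with the negative gradient.

    import numpy as np

    rng = np.random.default_rng(0)

    def loss(theta):
        # Toy quadratic loss standing in for the network's risk (assumption).
        return 0.5 * np.sum(theta ** 2)

    def grad(theta):
        # Analytic gradient of the toy loss, used only for comparison.
        return theta

    def zero_order_update(theta, eps=1e-3):
        # One gradient-free update: probe the loss along a random direction
        # and step against the observed loss difference.
        v = rng.standard_normal(theta.shape)
        delta = (loss(theta + eps * v) - loss(theta - eps * v)) / (2 * eps)
        return -delta * v

    theta = np.array([1.0, -2.0, 0.5])

    # A single update is noisy, but its average over many draws matches
    # the negative gradient: zero-order optimization "on average".
    updates = np.stack([zero_order_update(theta) for _ in range(20000)])
    print("mean update:      ", updates.mean(axis=0))
    print("negative gradient:", -grad(theta))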
The authors then propose a modification in which each learning opportunity triggers a large number of spikes and parameter updates, rather than just one. With this change, they show that the aggregated updates approximately correspond to a continuous gradient descent step. This suggests that stochastic gradient descent may indeed be a plausible mechanism for learning in BNNs, even though the learning process relies only on local information and never computes gradients explicitly.
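Staying with the same toy setup (quadratic loss, Gaussian probe directions, an illustrative learning rate of 0.1, none of which come from the paper), the modification can be mimicked by letting one learning opportunity trigger K such updates and applying their average: as K grows, the applied step approaches an exact gradient descent step.

    import numpy as np

    rng = np.random.default_rng(1)

    def loss(theta):
        return 0.5 * np.sum(theta ** 2)          # same toy loss as above

    def grad(theta):
        return theta                             # analytic gradient, for reference

    def zero_order_update(theta, eps=1e-3):
        # One local, gradient-free update along a random direction.
        v = rng.standard_normal(theta.shape)
        delta = (loss(theta + eps * v) - loss(theta - eps * v)) / (2 * eps)
        return -delta * v

    theta = np.array([1.0, -2.0, 0.5])
    lr = 0.1                                     # illustrative learning rate

    # One learning opportunity now triggers K updates instead of one;
    # the step that actually gets applied is their average.
    exact = -lr * grad(theta)                    # plain gradient descent step
    for K in (1, 10, 1000, 100000):
        step = lr * np.mean([zero_order_update(theta) for _ in range(K)], axis=0)
        print(f"K={K:6d}  averaged step={np.round(step, 3)}  gd step={np.round(exact, 3)}")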
The key insights are:
The original Schmidt-Hieber model shows that the local updates in BNNs correspond, on average, to a zero-order optimization method, which converges slowly.
By assuming many updates per learning opportunity, the authors show that the updates approximate a continuous gradient descent step.
This indicates that stochastic gradient descent may play a role in optimizing BNNs, even without explicitly computing gradients.
The randomness in the updates is a crucial component: averaging over many random updates is what lets the system converge to gradient descent dynamics (illustrated by the simulation sketch after this list).
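Under the same illustrative assumptions as above (toy quadratic loss, Gaussian probe directions, hand-picked dimension, learning rate, and horizon), the two regimes can be simulated directly: with a single random update per learning opportunity the loss decreases slowly and noisily, while averaging many updates per opportunity tracks exact gradient descent closely.

    import numpy as np

    rng = np.random.default_rng(2)
    d, lr, steps = 20, 0.05, 100             # illustrative dimension, rate, horizon

    def loss(theta):
        return 0.5 * np.sum(theta ** 2)       # toy loss standing in for the risk

    def zero_order_update(theta, eps=1e-3):
        v = rng.standard_normal(theta.shape)
        delta = (loss(theta + eps * v) - loss(theta - eps * v)) / (2 * eps)
        return -delta * v

    def run(K):
        # K random updates per learning opportunity, averaged before applying.
        theta = np.ones(d)
        for _ in range(steps):
            theta = theta + lr * np.mean(
                [zero_order_update(theta) for _ in range(K)], axis=0)
        return loss(theta)

    print("final loss, K=1   (single zero-order update):", run(K=1))
    print("final loss, K=500 (averaged updates):        ", run(K=500))
    print("final loss, exact gradient descent:          ",
          loss(np.ones(d) * (1 - lr) ** steps))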