The paper analyzes a stochastic model for supervised learning in biological neural networks (BNNs). It starts by reviewing the Schmidt-Hieber model, which shows that the local updating rule in BNNs corresponds to a zero-order optimization procedure on average.
The authors then propose a modification to the model, where each learning opportunity triggers a large number of spikes and parameter updates, rather than just one. With this change, the authors show that the updates approximately correspond to a continuous gradient descent step. This suggests that stochastic gradient descent may indeed be a plausible mechanism for learning in BNNs, even though the learning process relies only on local information and does not explicitly compute gradients.
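To make the averaging argument concrete, the following is a minimal numerical sketch, not the paper's actual spiking model: it treats each "spike" as a two-point zero-order update along a random direction and checks that the average of many such updates aligns with the negative gradient. The quadratic loss, the Gaussian perturbation directions, and all variable names are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Illustrative quadratic loss; A and b are made up for this sketch.
rng = np.random.default_rng(0)
dim = 5
A = rng.normal(size=(dim, dim))
A = A @ A.T + dim * np.eye(dim)   # symmetric positive definite
b = rng.normal(size=dim)

def loss(theta):
    return 0.5 * theta @ A @ theta - b @ theta

def grad(theta):
    # Analytic gradient, used only to check the zero-order estimate.
    return A @ theta - b

def zero_order_update(theta, eps=1e-3):
    """One 'spike': a random-direction update built from two loss
    evaluations only -- no explicit gradient computation."""
    u = rng.normal(size=dim)
    delta = (loss(theta + eps * u) - loss(theta)) / eps
    return -delta * u   # points downhill in expectation

theta = rng.normal(size=dim)

# One "learning opportunity" triggers many spikes; averaging their
# zero-order updates approximately recovers the negative gradient.
n_spikes = 20_000
avg_update = np.mean(
    [zero_order_update(theta) for _ in range(n_spikes)], axis=0
)

cos = avg_update @ (-grad(theta)) / (
    np.linalg.norm(avg_update) * np.linalg.norm(grad(theta))
)
print("cosine similarity with -gradient:", cos)   # close to 1
```

With only finitely many spikes per learning opportunity, the averaged update remains noisy, so in this sketch each opportunity behaves like a stochastic gradient descent step rather than an exact one.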
Source: arxiv.org, 04-11-2024, https://arxiv.org/pdf/2309.05102.pdf