The paper analyzes a stochastic model for supervised learning in biological neural networks (BNNs). It starts by reviewing the Schmidt-Hieber model, which shows that the local updating rule in BNNs corresponds to a zero-order optimization procedure on average.
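To make the zero-order idea concrete, here is a minimal sketch, not the paper's actual construction: a two-point evaluation of the loss along a random perturbation direction gives an update whose expectation points along the negative gradient, even though no gradient is ever computed. The quadratic loss, the Gaussian perturbations, and all step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss standing in for the network's risk; the loss actually
# analyzed in the paper is not reproduced here.
def loss(theta):
    return 0.5 * np.sum(theta ** 2)

def zero_order_update(theta, eps=1e-2, lr=0.1):
    """One zero-order update: only two loss evaluations at randomly perturbed
    parameters are used, yet in expectation the step equals a gradient descent
    step (exactly for a quadratic loss, up to O(eps^2) in general)."""
    u = rng.standard_normal(theta.shape)                      # random perturbation
    g_hat = (loss(theta + eps * u) - loss(theta - eps * u)) / (2 * eps) * u
    return theta - lr * g_hat

theta = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    theta = zero_order_update(theta)
print(loss(theta))   # decreases toward 0 on average, with no explicit gradients
```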
The authors then propose a modification to the model, where each learning opportunity triggers a large number of spikes and parameter updates, rather than just one. With this change, the authors show that the updates approximately correspond to a continuous gradient descent step. This suggests that stochastic gradient descent may indeed be a plausible mechanism for learning in BNNs, even though the learning process relies only on local information and does not explicitly compute gradients.
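The averaging effect can be illustrated with the same toy setup (quadratic loss, Gaussian perturbations, hypothetical parameter values, none taken from the paper): averaging many spike-like zero-order updates within a single learning opportunity yields a step that is numerically close to an exact gradient descent step.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(theta):
    return 0.5 * np.sum(theta ** 2)     # toy quadratic; its gradient is theta

theta = np.array([1.0, -2.0, 0.5])
eps, lr, n_spikes = 1e-2, 0.1, 10_000   # many updates per learning opportunity

# Average many single-spike zero-order estimates; by the law of large numbers
# the mean concentrates around the true gradient.
estimates = []
for _ in range(n_spikes):
    u = rng.standard_normal(theta.shape)
    estimates.append((loss(theta + eps * u) - loss(theta - eps * u)) / (2 * eps) * u)
g_mean = np.mean(estimates, axis=0)

print("averaged zero-order estimate:", g_mean)
print("true gradient:               ", theta)
print("resulting step vs. exact gradient descent step:",
      theta - lr * g_mean, theta - lr * theta)
```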
The key insights are:
- In the original Schmidt-Hieber model, the local, spike-triggered updating rule performs zero-order optimization: in expectation, each update moves the parameters along the negative gradient without ever computing it.
- When each learning opportunity instead triggers a large number of spikes and parameter updates, the averaged update concentrates around a gradient descent step, so the local rule effectively implements stochastic gradient descent.
- Gradient-based learning is therefore a plausible mechanism in BNNs, even though each update relies only on local information and no gradient is computed explicitly.