Equilibrium propagation (EP) is a promising alternative to backpropagation for training neural networks on biological or analog substrates, but requires weight symmetry and infinitesimal perturbations. We show that weight asymmetry introduces bias in the gradient estimates of generalized EP, and propose a homeostatic objective to improve the functional symmetry of the Jacobian, enabling EP to scale to complex tasks like ImageNet 32x32 without perfect weight symmetry.
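As a rough illustration of the idea, the sketch below implements a simplified, weight-space stand-in for such a homeostatic objective: it penalizes the mismatch between forward weights and feedback weights, which drives the effective Jacobian of a linearized recurrent dynamics toward symmetry. The names `W_f` and `W_b`, the penalty, and the update rule are illustrative assumptions, not the method of the summarized work, which targets the functional symmetry of the Jacobian rather than the raw weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Illustrative asymmetric connectivity: forward and feedback weights differ.
W_f = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W_b = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))

def jacobian_asymmetry(W_f, W_b):
    """Frobenius-norm measure of how far the connectivity is from symmetric.

    For linearized dynamics ds/dt = -s + W_f @ phi(s) with feedback carried by
    W_b, functional symmetry of the Jacobian would require W_b ~= W_f.T; the
    penalty below measures that residual.
    """
    return 0.5 * np.sum((W_b - W_f.T) ** 2)

def homeostatic_update(W_f, W_b, lr=1e-2):
    """Gradient step on the asymmetry penalty with respect to W_b only,
    nudging the feedback weights toward the transpose of the forward weights."""
    grad_W_b = W_b - W_f.T
    return W_b - lr * grad_W_b

print("initial asymmetry:", jacobian_asymmetry(W_f, W_b))
for step in range(200):
    W_b = homeostatic_update(W_f, W_b)
print("final asymmetry:  ", jacobian_asymmetry(W_f, W_b))
```

Minimizing such a penalty alongside the task loss is one way asymmetric feedback pathways could be kept functionally aligned with the forward pathway during training.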
The initial scale κ of the output function plays a pivotal role in the training dynamics of overparameterized neural networks, enabling rapid convergence to zero training loss regardless of the specific initialization scheme.
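A minimal sketch of how such an output scale can enter a model and shape the dynamics: a two-layer ReLU network whose output is multiplied by a fixed factor κ, trained by plain gradient descent at one learning rate. The parameterization, sizes, and hyperparameters are illustrative assumptions, not the setup of the summarized paper; the point is only that, with everything else held fixed, κ controls the effective step size on the outputs and hence how quickly the training loss is driven toward zero.

```python
import numpy as np

# Tiny overparameterized regression problem (sizes and names are illustrative).
data_rng = np.random.default_rng(0)
n, d, m = 10, 5, 512                      # samples, input dim, hidden width
X = data_rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = data_rng.normal(size=n)

def train(kappa, lr=1e-2, steps=2000):
    """Two-layer ReLU net with a fixed output scale kappa:
    f(x) = (kappa / sqrt(m)) * a . relu(W x).
    W and a are trained by gradient descent on the MSE; kappa is not trained."""
    rng = np.random.default_rng(0)         # same initial draw for every kappa
    W = rng.normal(size=(m, d))
    a = rng.normal(size=m)
    for _ in range(steps):
        Z = X @ W.T                        # pre-activations, shape (n, m)
        H = np.maximum(Z, 0.0)             # ReLU features
        pred = (kappa / np.sqrt(m)) * H @ a
        err = pred - y                     # shape (n,)
        scale = kappa / (np.sqrt(m) * n)
        grad_a = scale * H.T @ err
        grad_W = scale * ((err[:, None] * (Z > 0) * a).T @ X)
        a -= lr * grad_a
        W -= lr * grad_W
    final = (kappa / np.sqrt(m)) * np.maximum(X @ W.T, 0.0) @ a - y
    return 0.5 * np.mean(final ** 2)

# At a fixed learning rate, the gradients and the residuals both scale with
# kappa, so the function-space step size grows with kappa^2.
for kappa in (0.1, 1.0, 3.0):
    print(f"kappa = {kappa:>4}: final training loss {train(kappa):.3e}")
```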
Injecting structured noise during training of attractor neural networks can substantially improve their classification and generalization performance, approaching the capabilities of Support Vector Machines.
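The sketch below illustrates the general recipe of training an attractor network on corrupted pattern presentations. It uses a perceptron-style learning rule and simple i.i.d. bit flips as a stand-in for the structured noise the summary refers to, so it should be read as a toy demonstration of noise-during-training, not as the summarized method: each unit is trained to restore its clean target bit from a noisy version of the stored pattern, which tends to enlarge the basins of attraction.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P = 100, 10                              # neurons, stored patterns
patterns = rng.choice([-1.0, 1.0], size=(P, N))

def corrupt(p, flip_prob):
    """Return a copy of pattern p with each bit flipped with probability flip_prob."""
    flips = rng.random(p.shape) < flip_prob
    return np.where(flips, -p, p)

def train(train_noise, epochs=200, lr=0.1):
    """Perceptron-style learning of a recurrent weight matrix from noisy
    presentations: each unit is pushed to recover its clean target bit
    from a corrupted version of the whole pattern."""
    W = np.zeros((N, N))
    for _ in range(epochs):
        for p in patterns:
            noisy = corrupt(p, train_noise)
            h = W @ noisy                    # local fields
            wrong = (h * p) <= 0             # units not yet aligned with the clean bit
            W += lr * np.outer(wrong * p, noisy)
            np.fill_diagonal(W, 0.0)         # keep the net free of self-connections
    return W

def recall_accuracy(W, cue_noise, steps=20):
    """Average fraction of bits recovered by synchronous updates from noisy cues."""
    acc = 0.0
    for p in patterns:
        s = corrupt(p, cue_noise)
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1.0
        acc += np.mean(s == p)
    return acc / P

for train_noise in (0.0, 0.2):
    W = train(train_noise)
    print(f"train noise {train_noise:.1f}: recall from 30% corrupted cues = "
          f"{recall_accuracy(W, cue_noise=0.3):.2f}")
```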