Key concepts
This paper proposes Hamiltonian Learning, a novel unified framework for learning with neural networks from a possibly infinite stream of data in an online manner, without access to future information.
Summary
The paper presents Hamiltonian Learning (HL), a unified framework for neural computation and learning over time. HL leverages tools from optimal control theory to rethink the problem of learning from a continuous, possibly infinite, stream of data.
Key highlights:
HL is designed to learn in a forward manner, solving an initial-value problem rather than a boundary-value problem. This allows learning to proceed without access to future information.
HL recovers popular gradient-based learning techniques such as BackPropagation and BackPropagation Through Time by integrating its differential equations with the Euler method and enforcing a sequential constraint on the update operations (see the sketch after these highlights).
HL provides a uniform and flexible view of neural computation over a stream of data that is fully local in time and space. This locality enables customization in terms of parallelization, distributed computation, and memory-efficient BackPropagation.
The generality of HL is intended to give researchers a flexible framework that may enable novel advances in learning over time, an area that is not yet as mature as offline learning.
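
To make the forward, initial-value character of these updates concrete, below is a minimal sketch of forward-Euler integration of Hamiltonian dynamics for a single neuron, processing a stream one sample at a time. This is an illustration under assumptions, not the paper's exact formulation: the Hamiltonian `H`, the dynamics `f`, the per-step loss, and the names `x` (state), `p` (costate), `w` (weight), `dt`, and `eta` are all hypothetical, and the sign conventions follow Pontryagin's standard equations.

```python
# Hedged sketch: forward-Euler integration of Hamiltonian dynamics for
# a single neuron. All names and the specific H are illustrative
# assumptions, not the paper's exact scheme.
import jax
import jax.numpy as jnp

def H(x, p, w, u, y):
    # Hypothetical Hamiltonian H = p * f + loss, where f is an assumed
    # state dynamics and loss is an assumed instantaneous cost.
    f = jnp.tanh(w * u - x)          # assumed state dynamics f(x, u; w)
    loss = 0.5 * (x - y) ** 2        # assumed per-step loss against target y
    return p * f + loss

# Partial derivatives of H, obtained by automatic differentiation.
dH_dx = jax.grad(H, argnums=0)
dH_dp = jax.grad(H, argnums=1)
dH_dw = jax.grad(H, argnums=2)

def euler_step(x, p, w, u, y, dt=0.1, eta=0.01):
    # One forward Euler step of the Hamiltonian equations. Only the
    # current sample (u, y) is used, so this is an initial-value
    # problem: no future information is required.
    x_new = x + dt * dH_dp(x, p, w, u, y)   # dx/dt =  dH/dp
    p_new = p - dt * dH_dx(x, p, w, u, y)   # dp/dt = -dH/dx
    w_new = w - eta * dH_dw(x, p, w, u, y)  # local gradient step on w
    return x_new, p_new, w_new

# Process a (potentially infinite) stream one sample at a time.
x, p, w = 0.0, 0.0, 0.5
for u, y in [(1.0, 0.8), (0.5, 0.4), (0.9, 0.7)]:  # stand-in stream
    x, p, w = euler_step(x, p, w, u, y)
```

Intuitively, with the step size fixed and the state and costate equations unrolled into a sequential pass, the weight update above reduces to a plain gradient step; this is the sense in which Euler discretization of such dynamics connects to BackPropagation-style learning, as the highlights note.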