Core Concepts
Random shallow ReLU networks can efficiently approximate smooth functions on bounded sets, making them suitable for adaptive control applications.
Abstract
Neural networks with single hidden layers are commonly used in adaptive control.
The approximation properties these control methods rely on are commonly assumed rather than proven.
Lamperski and Lekang aim to show that ReLU networks with randomly generated weights and biases achieve provably accurate approximations.
The paper introduces a new integral representation theorem for ReLU activations and sufficiently smooth functions (a generic schematic of such a representation appears after this abstract).
The results can be applied to construct neural networks for model reference adaptive control (MRAC) problems; a minimal illustrative sketch appears at the end of this digest.
Theoretical properties of neural networks with random initializations have been extensively studied.
The paper addresses the gap in proving the required approximation properties for adaptive control.
Theoretical challenges include quantifying the effects of smoothness in high dimensions and relaxing smoothness requirements.
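For orientation, integral representation theorems of this kind express a sufficiently smooth function as an average of ReLU features over a distribution of weight-bias pairs. The schematic below is a generic form of such a representation; the symbols α (a weighting function) and μ (a probability measure) are illustrative notation, not the paper's exact statement.

```latex
% Generic integral representation over ReLU features (schematic; not the
% paper's exact theorem): f is an average of ramps \max(0, w^T x + b)
% weighted by \alpha against a measure \mu on weight-bias pairs.
f(x) = \int_{\mathbb{S}^{n-1} \times [-R,\,R]} \alpha(w, b)\, \max\bigl(0,\; w^{\top} x + b\bigr)\, d\mu(w, b)
```

Sampling m pairs (w_i, b_i) i.i.d. from μ and replacing the integral with the Monte Carlo average (1/m) Σ_i α(w_i, b_i) max(0, w_i^T x + b_i) is the standard route to the O(m^{-1/2}) high-probability rates quoted in the Stats below.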
Stats
"ReLU networks with randomly generated weights and biases achieve L8 error of Opm´1{2q with high probability."
"The worst-case error on balls around the origin decays like Opm´1{2q, where m is the number of neurons."
"The bounds simplify for the uniform distribution over Sn´1 ˆ r´R, Rs."
Quotes
"Neural networks are regularly employed in adaptive control of nonlinear systems and related methods of reinforcement learning."
"The main contribution of this paper shows that two-layer neural networks with ReLU activation functions can approximate sufficiently smooth functions on bounded sets to arbitrary accuracy."