Core Concepts
Neural networks can be extended to accept inputs of any dimension: by combining equivariance with representation stability, the infinite sequence of equivariant layers (one per dimension) admits a finite parameterization.
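As a concrete illustration (not taken from the paper), consider a DeepSets-style permutation-equivariant linear layer. Two scalars parameterize a valid layer in every dimension n, so one set of learned parameters instantiates the layer at any input size. The class name `PermEquivariantLinear` is hypothetical; this is a minimal sketch assuming the symmetric group acting by permuting coordinates:

```python
import numpy as np

class PermEquivariantLinear:
    """Permutation-equivariant linear layer in the DeepSets style.

    Two scalar parameters (a, b) define W_n = a * I_n + (b / n) * 1 1^T
    for every dimension n, so the same parameters can be instantiated
    on inputs of any size.
    """

    def __init__(self, a: float, b: float):
        self.a = a  # weight on the identity component
        self.b = b  # weight on the averaging component

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Works for an input of any length n, and commutes with
        # permutations of the entries of x.
        return self.a * x + self.b * x.mean() * np.ones_like(x)


layer = PermEquivariantLinear(a=1.5, b=-0.5)
print(layer(np.array([1.0, 2.0, 3.0])))    # n = 3
print(layer(np.arange(10, dtype=float)))   # same parameters, n = 10
```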
Abstract
The key insights of this paper are:
Free Equivariant Neural Networks: The authors introduce the concept of "free" neural networks, which can be instantiated in any dimension. This is achieved by considering sequences of nested vector spaces and groups, and ensuring that the network layers are equivariant with respect to the group actions.
Representation Stability: The authors leverage the mathematical concept of "representation stability" to show that the dimensions of the spaces of equivariant linear layers often stabilize once the dimension is large enough. This yields a finite parameterization of the infinite sequence of equivariant layers (a numerical check of this stabilization appears after this list).
Computational Recipe: The authors provide a computational procedure for learning free equivariant neural networks from data in a fixed dimension and then extending them to other dimensions. It involves finding a free basis for the equivariant linear layers and imposing a compatibility condition so that the learned network generalizes well across dimensions (a toy version is sketched below).
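The stabilization claim can be checked numerically in a simple case. The sketch below (my own illustration, not code from the paper) computes the dimension of the space of S_n-equivariant linear maps R^n -> R^n by stacking the commutation constraints P W = W P over a generating set of permutations and taking the nullity; the answer is 2 for every n >= 2:

```python
import numpy as np

def equivariant_dim(n: int) -> int:
    """Dimension of {W in R^{n x n} : P W = W P for all permutations P}.

    Adjacent transpositions generate S_n, so their constraints suffice.
    Returns the nullity of the stacked constraint matrix, i.e. the
    number of free parameters in an equivariant layer.
    """
    eye = np.eye(n)
    constraints = []
    for i in range(n - 1):
        P = eye.copy()
        P[[i, i + 1]] = P[[i + 1, i]]  # swap rows i and i+1
        # vec(P W - W P) = (kron(I, P) - kron(P^T, I)) vec(W)
        constraints.append(np.kron(eye, P) - np.kron(P.T, eye))
    A = np.vstack(constraints)
    return n * n - np.linalg.matrix_rank(A)

for n in range(2, 7):
    print(n, equivariant_dim(n))   # prints 2 for every n >= 2
```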
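A toy version of the learn-then-extend recipe, under strong simplifying assumptions (permutation symmetry, a linear target map, least-squares fitting in place of gradient training; the paper's actual procedure is more general): fit coefficients on the free basis in dimension 5, then reuse the same coefficients on the basis instantiated in dimension 50. The helpers `free_basis` and `target` are hypothetical names introduced for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def free_basis(n: int) -> list[np.ndarray]:
    """Free basis for S_n-equivariant linear maps R^n -> R^n (n >= 2):
    the identity and the normalized all-ones (averaging) matrix."""
    return [np.eye(n), np.ones((n, n)) / n]

def target(x: np.ndarray) -> np.ndarray:
    # A dimension-independent equivariant map we pretend is unknown.
    return 2.0 * x - 3.0 * x.mean()

# "Train" in a fixed dimension: least-squares fit of the coefficients
# on the free basis, one feature column per basis element.
n_train = 5
X = rng.normal(size=(200, n_train))
Y = np.stack([target(x) for x in X])
B = free_basis(n_train)
F = np.stack([np.stack([Bi @ x for Bi in B], axis=1) for x in X])
coeffs, *_ = np.linalg.lstsq(F.reshape(-1, len(B)), Y.reshape(-1),
                             rcond=None)
print("learned coefficients:", coeffs)   # approximately [2.0, -3.0]

# Extend to a new dimension by reusing the same coefficients on the
# free basis instantiated at n = 50.
n_test = 50
W = sum(c * Bi for c, Bi in zip(coeffs, free_basis(n_test)))
x = rng.normal(size=n_test)
print("max error at n=50:", np.max(np.abs(W @ x - target(x))))
```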
The authors demonstrate the effectiveness of their approach through preliminary numerical experiments. The key contribution of this work is a general, black-box framework for training neural networks that can handle inputs of arbitrary dimension, a common requirement in scientific and engineering applications.