Core Concepts
A mixture of neural operators (MoNO) can approximate any uniformly continuous non-linear operator between Sobolev spaces, while keeping the complexity of each individual expert neural operator small enough to soften the curse of dimensionality.
Abstract
The paper proposes a mixture-of-neural-operators (MoNO) model to approximate non-linear operators between Sobolev spaces. The key insights are:
The MoNO model distributes the parametric complexity of the operator approximation across a network of expert neural operators (NOs), organized in a tree structure.
Each expert NO in the MoNO has a small depth, width, and rank, each depending only polynomially on the reciprocal of the desired approximation error and on the modulus of continuity of the target operator. This sidesteps the exponential parameter growth that a single classical NO generically needs to reach the same accuracy.
The tree structure routes each input to the appropriate expert NO, ensuring that the overall MoNO can approximate any uniformly continuous non-linear operator to any desired accuracy while keeping the complexity of each individual NO manageable (a minimal code sketch of this routing appears at the end of this section).
The authors provide explicit complexity estimates for the depth, width, and rank of each expert NO, and for the number of experts, required for the MoNO to approximate a target operator within a given error tolerance (restated schematically at the end of this section). This quantifies how the MoNO architecture softens the curse of dimensionality in operator learning.
The authors also derive new quantitative universal approximation results for classical NOs, which serve as the building blocks for the MoNO construction.
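The headline guarantee can be restated schematically as follows; the notation is ours, and the exact norms, constants, and exponents in the paper differ. Here $G$ is the target operator, $\omega$ its modulus of continuity, $\varepsilon > 0$ the error tolerance, and $\{U_k\}$ the routing partition of the input ball, with one expert NO $\mathcal{N}_k$ per cell:

```latex
% Schematic restatement (our notation, not the paper's exact bounds):
% each expert is accurate on its own routing cell,
\[
  \sup_{u \in U_k} \bigl\| G(u) - \mathcal{N}_k(u) \bigr\| \le \varepsilon
  \qquad \text{for each cell } U_k ,
\]
% while every expert obeys the per-expert parameter restrictions,
\[
  \operatorname{depth}(\mathcal{N}_k),\ \operatorname{width}(\mathcal{N}_k),\
  \operatorname{rank}(\mathcal{N}_k)
  \;\le\; C_\omega \, \operatorname{poly}(1/\varepsilon),
\]
% so only the number of experts K(\varepsilon) absorbs the remaining,
% possibly exponential, complexity.
```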
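Below is a minimal, illustrative sketch of the architecture described above, assuming input functions discretized on a 1-D grid. The expert design (Fourier-style low-rank layers), the sign-based binary routing rule, and all names (`MoNO`, `ExpertNO`, `SpectralLayer`, `levels`, `rank`) are our own stand-ins, not the paper's construction; the point is only that each expert stays small while the tree decides which single expert handles a given input.

```python
import numpy as np

class SpectralLayer:
    """One Fourier-style NO layer: keep the lowest `rank` frequencies, apply a
    learned complex multiplier, add a scaled skip connection, then a ReLU."""
    def __init__(self, n_grid, rank, rng):
        assert rank <= n_grid // 2 + 1, "rank is capped by the grid resolution"
        self.rank = rank
        self.weights = rng.standard_normal(rank) + 1j * rng.standard_normal(rank)
        self.skip = 0.1 * rng.standard_normal()

    def __call__(self, u):
        u_hat = np.fft.rfft(u)
        v_hat = np.zeros_like(u_hat)
        v_hat[:self.rank] = self.weights * u_hat[:self.rank]
        v = np.fft.irfft(v_hat, n=len(u))
        return np.maximum(v + self.skip * u, 0.0)

class ExpertNO:
    """A deliberately small expert: its depth and rank stay fixed as the
    number of experts grows, mirroring the per-expert restrictions."""
    def __init__(self, n_grid, depth, rank, rng):
        self.layers = [SpectralLayer(n_grid, rank, rng) for _ in range(depth)]

    def __call__(self, u):
        for layer in self.layers:
            u = layer(u)
        return u

class MoNO:
    """Binary-tree router over 2**levels leaves, one expert NO per leaf;
    only the selected expert is evaluated for a given input function."""
    def __init__(self, n_grid, levels, depth, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.levels = levels
        self.experts = [ExpertNO(n_grid, depth, rank, rng)
                        for _ in range(2 ** levels)]

    def route(self, u):
        # Descend the tree: at level k, branch on the sign of the real part
        # of the k-th Fourier coefficient of u (a stand-in routing rule).
        u_hat = np.fft.rfft(u)
        leaf = 0
        for k in range(self.levels):
            leaf = 2 * leaf + (1 if u_hat[k].real >= 0.0 else 0)
        return leaf

    def __call__(self, u):
        return self.experts[self.route(u)](u)

# Evaluate on one discretized input u: [0,1] -> R on a 128-point grid.
x = np.linspace(0.0, 1.0, 128, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
model = MoNO(n_grid=128, levels=3, depth=2, rank=8)
print(model(u).shape)  # (128,): one small expert handled the whole input
```

Note the design choice the sketch makes visible: adding routing levels multiplies the number of experts (here 2**levels) without touching any expert's depth, width, or rank, which is exactly where the MoNO construction parks the hard part of the approximation problem.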