Core Concepts
This work proposes a methodology for directly approximating backstepping gains with neural operators, bypassing the approximation of the full backstepping kernels. Because the gain is a one-variable trace of the two-variable kernel, this simplifies the operator being approximated and the training of its neural approximation, with an expected reduction in the neural network size.
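In operator terms (the notation here is illustrative, not taken from the source), the shift is from learning a map whose output is a two-variable function to learning a map whose output is the one-variable boundary trace the controller actually uses:

```latex
% Full-kernel operator (previous approach): output is a 2-D function
\mathcal{K}\colon \lambda \;\mapsto\; k(\cdot,\cdot)
  \quad \text{on } \{\, 0 \le y \le x \le 1 \,\},
% Gain-only operator (this approach): output is the 1-D trace used by the feedback law
\mathcal{G}\colon \lambda \;\mapsto\; k(1,\cdot)
  \quad \text{on } [0,1].
```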
Abstract
The paper presents a novel approach to efficiently compute the gains of model-based control laws obtained through the backstepping method for partial differential equations (PDEs). The key idea is to use neural operators to directly approximate the backstepping gain function, rather than the full backstepping kernel function.
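For concreteness, in the standard Dirichlet reaction-diffusion benchmark (one of the classes treated; the equations below are the textbook backstepping setup, stated here as an assumed illustration), the controller uses only the trace of the kernel along its upper boundary:

```latex
% Plant with Dirichlet boundary control
u_t(x,t) = u_{xx}(x,t) + \lambda(x)\,u(x,t), \qquad u(0,t) = 0, \quad u(1,t) = U(t),
% Kernel PDE on the triangle 0 <= y <= x <= 1 (the full-kernel approach learns k)
k_{xx}(x,y) - k_{yy}(x,y) = \lambda(y)\,k(x,y), \qquad
k(x,0) = 0, \quad k(x,x) = -\tfrac12 \int_0^x \lambda(s)\,ds,
% Feedback law: only the trace k(1,.) enters (the gain-only approach learns this)
U(t) = \int_0^1 k(1,y)\,u(y,t)\,dy .
```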
The main highlights are:
The gain-only approximation approach simplifies the operator being approximated and the training of its neural approximation, leading to a reduced neural network size compared to the previous full-kernel approximation approach.
The gain-only approach induces a more "unforgiving" perturbation in the target system: it acts at the boundary condition rather than in the domain (see the target-system sketch after this list). This requires a more involved Lyapunov analysis, but the stability guarantees are retained.
The gain-only approach appears inapplicable to adaptive control applications, where the approximate kernel is needed, but is likely applicable to gain scheduling applications.
The paper provides theoretical results on the stability of the closed-loop system under the neural operator-approximated backstepping gains for 1D hyperbolic PDEs, Dirichlet reaction-diffusion PDEs, and Neumann reaction-diffusion PDEs.
The key challenge common to both the gain-only and full-kernel approaches is the generation of the training set, which requires numerically solving the full 2D backstepping kernel equations (a solver sketch follows this list).
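To see why the gain-only error is boundary-borne (a sketch under the same assumed Dirichlet reaction-diffusion benchmark as above): keeping the exact backstepping transformation while feeding back through an approximate gain \(\hat k(1,\cdot)\) leaves the target dynamics intact and pushes the entire approximation error into the boundary condition of the target system:

```latex
w(x,t) = u(x,t) - \int_0^x k(x,y)\,u(y,t)\,dy, \qquad
U(t) = \int_0^1 \hat k(1,y)\,u(y,t)\,dy
\quad \Longrightarrow \quad
w_t = w_{xx}, \qquad w(0,t) = 0, \qquad
w(1,t) = \int_0^1 \big(\hat k(1,y) - k(1,y)\big)\,u(y,t)\,dy .
```

Because this residual enters through the boundary rather than as an in-domain forcing term, it is harder to dominate in a Lyapunov estimate, which is consistent with the more involved analysis noted in the highlights.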
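For the training-set bottleneck, below is a minimal sketch of such a solver for the same assumed benchmark: it solves the kernel equations by the classical successive-approximation scheme in characteristic coordinates, computing the full 2D kernel (as the highlight notes) even though only its 1D trace is kept as the training label. The function name, discretization, and tolerances are illustrative choices, not from the source.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.special import i1

def backstepping_gain(lam, n=201, iters=60, tol=1e-12):
    """Compute the gain trace y -> k(1, y) for the Dirichlet reaction-diffusion
    plant u_t = u_xx + lam(x) u, u(0,t) = 0, u(1,t) = U(t).

    The kernel PDE  k_xx - k_yy = lam(y) k,  k(x,0) = 0,
    k(x,x) = -(1/2) int_0^x lam(s) ds  is solved by successive approximation
    in characteristic coordinates xi = x + y, eta = x - y, where it becomes
      G(xi,eta) = -(1/4) int_eta^xi lam(tau/2) dtau
                  + (1/4) int_eta^xi int_0^eta lam((tau-s)/2) G(tau,s) ds dtau.
    `n` must be odd so that xi = 1 +/- y lands on grid points.
    """
    grid = np.linspace(0.0, 2.0, n)              # shared grid for xi, eta, tau, s
    d = grid[1] - grid[0]
    arg = np.clip((grid[:, None] - grid[None, :]) / 2.0, 0.0, 1.0)
    LAM = np.tril(lam(arg))                      # lam((tau - s)/2) on s <= tau
    lam_diag = lam(grid / 2.0)                   # lam(tau / 2)
    G = np.zeros((n, n))                         # G[i, j] ~ G(xi_i, eta_j)
    for _ in range(iters):
        # inner(t, j) = int_0^{eta_j} lam((tau_t - s)/2) G(tau_t, s) ds
        inner = cumulative_trapezoid(LAM * G, dx=d, axis=1, initial=0.0)
        H = 0.25 * (inner - lam_diag[:, None])   # integrand in tau
        C = cumulative_trapezoid(H, dx=d, axis=0, initial=0.0)
        G_new = np.tril(C - np.diag(C)[None, :]) # integral from tau = eta_j to xi_i
        err = np.max(np.abs(G_new - G))
        G = G_new
        if err < tol:
            break
    # gain trace: k(1, y) = G(1 + y, 1 - y)
    mid = (n - 1) // 2                           # grid index of the value 1.0
    j = np.arange(mid + 1)
    return grid[j], G[mid + j, mid - j]

# Sanity check against the known closed-form kernel for constant lambda:
# k(1, y) = -lam * y * I1(z) / z with z = sqrt(lam * (1 - y^2)).
lam0 = 10.0
y, gain = backstepping_gain(lambda s: lam0 * np.ones_like(s))
z = np.sqrt(lam0 * (1.0 - y ** 2))
exact = np.where(z > 0, -lam0 * y * i1(z) / np.where(z > 0, z, 1.0),
                 -lam0 * y / 2.0)               # z -> 0 limit at y = 1
print("max gain error vs closed form:", np.max(np.abs(gain - exact)))
```

A training set would then be assembled by sampling many reaction coefficients lam and storing the pairs (lam, gain); under the gain-only approach the 2D kernel computed en route is discarded, but it still has to be solved for, which is exactly the shared cost described above.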