Guaranteed Approximation Bounds and Efficient Mixed-Precision Training for Neural Operators
Neural operators, such as the Fourier Neural Operator (FNO), learn solution operators for partial differential equations (PDEs) and other mappings between function spaces. However, training these models is computationally intensive, especially at high resolution. This work introduces the first mixed-precision training method for neural operators, which substantially reduces GPU memory usage and improves training throughput without sacrificing accuracy.
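To make the setting concrete, below is a minimal sketch of mixed-precision training applied to a toy FNO-style model in PyTorch, using torch.autocast and gradient scaling. This is an illustrative assumption, not the method proposed in this work: the names `TinyFNO` and `SpectralConv1d`, the choice to keep FFTs in float32, and the synthetic data are all placeholders for the purpose of the example.

```python
# Sketch only: generic PyTorch automatic mixed precision around an FNO-like
# model. The paper's actual precision scheme is not reproduced here.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """1D spectral convolution: FFT -> keep low modes -> learned weights -> iFFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        # Complex weights stored as (real, imag) pairs in the last dimension.
        self.weight = nn.Parameter(scale * torch.randn(channels, channels, modes, 2))

    def forward(self, x):                       # x: (batch, channels, grid)
        # FFTs are kept in float32 here; complex half precision is not broadly supported.
        x_ft = torch.fft.rfft(x.float(), dim=-1)
        w = torch.view_as_complex(self.weight)
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum(
            "bix,iox->box", x_ft[..., :self.modes], w)
        return torch.fft.irfft(out_ft, n=x.shape[-1], dim=-1)

class TinyFNO(nn.Module):
    """Single Fourier layer with lifting and projection, for illustration only."""
    def __init__(self, channels=32, modes=16):
        super().__init__()
        self.lift = nn.Conv1d(1, channels, 1)
        self.spectral = SpectralConv1d(channels, modes)
        self.pointwise = nn.Conv1d(channels, channels, 1)
        self.project = nn.Conv1d(channels, 1, 1)

    def forward(self, x):
        x = self.lift(x)
        x = torch.relu(self.spectral(x) + self.pointwise(x))
        return self.project(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyFNO().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
loss_fn = nn.MSELoss()

for step in range(100):
    # Synthetic stand-in for (input function, solution) pairs on a 1D grid.
    a = torch.randn(8, 1, 256, device=device)
    u = torch.sin(a)                            # placeholder target operator

    optimizer.zero_grad(set_to_none=True)
    # Pointwise layers run in half precision under autocast; the loss is
    # scaled so that small fp16 gradients do not underflow.
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        loss = loss_fn(model(a), u)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

Under this kind of setup, the memory and throughput gains come from running the dense pointwise operations in half precision, while numerically sensitive steps (here, the FFTs) are left in single precision; how to push more of the computation into reduced precision safely is precisely the question the paper addresses.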