The authors introduce a data-free paradigm for solving parametric partial differential equations (PDEs) using neural networks. Traditional numerical solvers like finite difference methods (FDM) and finite element methods (FEM) can be time-consuming, especially for complex PDE systems. Recent data-driven methods attempt to learn the solution operator from simulated data, but this creates a "chicken-and-egg" dilemma: generating the training data itself requires running the time-consuming numerical solvers.
To address this, the authors propose constructing a mean squared residual (MSR) loss directly from the discretized PDE, without needing any simulated data. This MSR loss encodes the physical constraints of the PDE into the learning process. However, the authors find that most modern neural network architectures perform poorly when trained with the MSR loss, as it requires the network to model long-range spatial entanglements whose patterns vary across different PDEs.
To handle this, the authors design a new neural network architecture called LordNet. LordNet establishes the global entanglements using a low-rank decomposition with simple fully connected layers, which is flexible enough to model the entanglements of various PDEs efficiently.
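The core efficiency argument can be sketched as follows: a dense linear entanglement across all n spatial positions costs O(n^2) parameters and compute, while a rank-r factorization into two thin fully connected layers costs only O(n r). This is a conceptual illustration of the low-rank decomposition, assuming a flattened 1D view of the grid; it is not the LordNet architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1024   # number of spatial points (flattened grid)
r = 16     # rank of the decomposition, r << n

# A dense global entanglement would need an n x n weight matrix.
# The low-rank version factors it into two thin FC layers, U and V.
U = rng.standard_normal((n, r)) / np.sqrt(n)
V = rng.standard_normal((r, n)) / np.sqrt(r)

def lowrank_entangle(x):
    """Mix information across all n positions through an r-dimensional
    bottleneck; equivalent to multiplying by the rank-r matrix U @ V,
    but at O(n*r) cost per sample instead of O(n^2)."""
    return (x @ U) @ V

x = rng.standard_normal((4, n))   # batch of 4 input fields
y = lowrank_entangle(x)

dense_params = n * n          # parameters of a full n x n mixing layer
lowrank_params = 2 * n * r    # parameters of the factorized version
```

Because U and V are learned, the network can adapt the low-rank mixing pattern to whatever long-range structure a given PDE's residual loss demands, rather than committing to a fixed global operator.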
The authors evaluate LordNet on two representative PDEs: Poisson's equation and the Navier-Stokes equations. They show that the combination of the MSR loss and the LordNet architecture outperforms both data-driven neural operators and other neural network architectures trained with the MSR loss. For the Navier-Stokes equations, the learned LordNet operator is over 50 times faster than a finite difference solver given the same computational resources.