Key Concepts
PDEformer is a neural solver capable of simultaneously addressing various types of one-dimensional partial differential equations. It represents each PDE as a computational graph, integrating the equation's symbolic and numeric information, and couples a graph Transformer with an implicit neural representation (INR) to produce mesh-free solution predictions.
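To make the "mesh-free" idea concrete, here is a minimal sketch of an implicit neural representation: a small MLP that maps a coordinate (t, x) directly to a predicted solution value, so it can be queried at arbitrary points rather than on a fixed grid. The network shape and the random placeholder weights are assumptions for illustration; in PDEformer the INR would be conditioned on the graph Transformer's output, which is not modeled here.

```python
import numpy as np

# Sketch of a mesh-free INR: u(t, x) is a function of continuous
# coordinates, not a value stored on a discretization grid.
rng = np.random.default_rng(0)

def init_mlp(sizes):
    # Random placeholder weights (hypothetical; real weights are learned).
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def inr(params, coords):
    # Forward pass: coords has shape (batch, 2) for (t, x) pairs.
    h = coords
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b  # one scalar prediction per query point

params = init_mlp([2, 32, 32, 1])
queries = np.array([[0.1, 0.3], [0.5, -0.2]])  # arbitrary (t, x) points
u_pred = inr(params, queries)
print(u_pred.shape)  # (2, 1)
```

Because the INR accepts any coordinate, evaluation resolution is decoupled from training data resolution, which is what distinguishes this from grid-based solvers.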
Summary
The paper introduces PDEformer, a neural solver for partial differential equations (PDEs) that can handle various types of PDEs. The key aspects are:
- Representation of the PDE in the form of a computational graph, which facilitates the integration of both symbolic and numerical information inherent in a PDE.
- Utilization of a graph Transformer and an implicit neural representation (INR) to generate mesh-free predicted solutions.
- Pretraining on a diverse dataset of PDEs, which enables PDEformer to achieve zero-shot accuracies on benchmark datasets that surpass those of adequately trained expert models.
- Demonstration of promising results in the inverse problem of PDE coefficient recovery.
The experiments are currently limited to one-dimensional PDEs, but the authors believe this work serves as a noteworthy milestone towards building a foundation PDE model with high generality.
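The computational-graph representation described above can be sketched as a small DAG whose node types encode operators and operands, with numeric payloads (e.g. coefficient values) attached as node features. The node-type vocabulary and graph API below are hypothetical stand-ins, not the paper's actual implementation; the example encodes the advection equation u_t + c·u_x = 0 with c = 0.5.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    id: int
    kind: str                      # e.g. "u", "dt", "dx", "mul", "add", "coeff"
    value: Optional[float] = None  # numeric payload for coefficient nodes

@dataclass
class PDEGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)  # (src, dst)

    def add(self, kind, value=None):
        node = Node(len(self.nodes), kind, value)
        self.nodes.append(node)
        return node.id

    def connect(self, src, dst):
        self.edges.append((src, dst))

# Encode u_t + c * u_x = 0 with c = 0.5:
g = PDEGraph()
u   = g.add("u")                          # the unknown field u(t, x)
ut  = g.add("dt");  g.connect(u, ut)      # time derivative of u
ux  = g.add("dx");  g.connect(u, ux)      # spatial derivative of u
c   = g.add("coeff", 0.5)                 # numeric coefficient as a feature
cux = g.add("mul"); g.connect(c, cux); g.connect(ux, cux)   # c * u_x
eq  = g.add("add"); g.connect(ut, eq); g.connect(cux, eq)   # residual = 0
print(len(g.nodes), len(g.edges))  # 6 6
```

The point of this encoding is that both the symbolic structure (which operators appear, how they compose) and the numeric data (coefficient values) live in one graph, which a graph Transformer can then consume directly.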
Statistics
The pretraining dataset contains 500k samples of one-dimensional time-dependent PDEs with random coefficients and initial conditions.
The PDEBench datasets used for evaluation have 10k samples each, with 9k used for training and 1k for testing.
Quotes
"Drawing inspirations from successful experiences in natural language processing and computer vision, we aim to develop a foundation PDE model with the highest generality, capable of handling any PDE in the ideal case."
"Different from previous approaches, we propose to express the symbolic form of the PDE as a computational graph, ensuring that the resulting graph structure, along with its node types and feature vectors, encapsulate all the symbolic and numeric information necessary for solving the PDE."
"Notably, for the Burgers' equation with ν = 0.1 and 0.01, the zero-shot PDEformer outperforms the sufficiently trained FNO model."