Core Concepts
Efficiently control parametric PDEs using sparse polynomial policies.
Abstract
This work applies deep reinforcement learning to the efficient control of parametric partial differential equations (PDEs). It introduces a method that combines dictionary learning with differentiable L0 regularization to learn sparse, robust, and interpretable control policies for parametric PDEs. The approach is tested on controlling parametric Kuramoto-Sivashinsky and convection-diffusion-reaction PDEs, where it outperforms baseline methods.
Introduction
Optimal control of PDEs arises throughout engineering and science.
Key challenges: computational efficiency and adaptability to varying PDE parameters.
Reinforcement Learning
RL for solving sequential decision-making problems.
Value-based, policy-based, and actor-critic algorithms.
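As a concrete illustration of the actor-critic family, to which the TD3 agent used later in the results belongs, here is a minimal deterministic actor-critic update. This is a sketch assuming PyTorch; the network sizes, learning rates, and toy dimensions are illustrative, not taken from the paper.

```python
# Minimal deterministic actor-critic update (the family TD3 extends).
# A sketch assuming PyTorch; dimensions and hyperparameters are illustrative.
import torch
import torch.nn as nn

state_dim, action_dim, gamma = 8, 2, 0.99

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, action_dim))
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(), nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(s, a, r, s_next):
    # Critic step: regress Q(s, a) toward the one-step bootstrapped target.
    with torch.no_grad():
        q_next = critic(torch.cat([s_next, actor(s_next)], dim=-1))
        target = r + gamma * q_next
    q = critic(torch.cat([s, a], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor step: ascend the critic's value of the actor's own actions.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Usage with a dummy transition batch:
s, a, r, s_next = torch.randn(32, 8), torch.randn(32, 2), torch.randn(32, 1), torch.randn(32, 8)
update(s, a, r, s_next)
```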
Sparse Dictionary Learning
Approximating nonlinear functions as sparse linear combinations of candidate terms from a dictionary.
The sparse identification of nonlinear dynamics (SINDy) method: sparse regression over a library of candidate features.
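To make the dictionary-learning idea concrete, here is a minimal SINDy-style sparse regression on a toy 1-D system using sequentially thresholded least squares. The polynomial library, threshold, and toy dynamics are illustrative assumptions, not the paper's setup.

```python
# SINDy-style sparse regression sketch, using NumPy only.
import numpy as np

def poly_library(x):
    # Candidate dictionary: [1, x, x^2, x^3], one column per term.
    return np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(Theta, dxdt, threshold=0.1, n_iter=10):
    # Sequentially thresholded least squares: fit, zero out small
    # coefficients, refit on the surviving columns.
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy data generated from dx/dt = -2x + 0.5x^3:
x = np.linspace(-2, 2, 200)
dxdt = -2 * x + 0.5 * x**3
print(stlsq(poly_library(x), dxdt))  # recovers ~[0, -2, 0, 0.5]
```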
Sparsifying Neural Network Layers with L0 Regularization
Differentiable L0 regularization for sparsity: stochastic hard-concrete gates make an expected-L0 penalty trainable by gradient descent.
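The following is a minimal sketch of the hard-concrete gate behind differentiable L0 regularization (Louizos et al.), assuming PyTorch. The stretch and temperature constants are the commonly used defaults; how the paper tunes them is not assumed here.

```python
# Hard-concrete L0 gate: a stochastic, differentiable relaxation of a
# binary mask. Constants gamma, zeta, beta are common defaults (assumed).
import math
import torch

gamma, zeta, beta = -0.1, 1.1, 2.0 / 3.0

def hard_concrete_gate(log_alpha):
    # Sample a stretched binary-concrete variable, then rectify to [0, 1];
    # the gate is exactly 0 (pruned) or exactly 1 with finite probability.
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    return (s * (zeta - gamma) + gamma).clamp(0.0, 1.0)

def expected_l0(log_alpha):
    # Differentiable expected number of non-zero gates; added to the
    # training loss with a sparsity weight.
    return torch.sigmoid(log_alpha - beta * math.log(-gamma / zeta)).sum()

log_alpha = torch.zeros(10, requires_grad=True)  # one gate per weight
z = hard_concrete_gate(log_alpha)                # multiply weights elementwise by z
penalty = expected_l0(log_alpha)                 # add lambda * penalty to the loss
```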
Deep Reinforcement Learning with L0-Sparse Polynomial Policies
Replacing the deep actor network with a polynomial dictionary layer whose coefficients are pruned by differentiable L0 gates, yielding sparse, interpretable policies that remain agnostic to the underlying DRL method.
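A sketch of what this combination might look like: an actor whose output is a linear map over a polynomial dictionary of the state, with every coefficient gated by a hard-concrete L0 mask. The feature construction (per-component monomials up to a fixed degree) and all dimensions below are illustrative assumptions; the paper's exact dictionary may differ.

```python
# L0-sparse polynomial policy sketch: control = Theta(s) @ (W * z), where
# Theta(s) is a polynomial dictionary of the state and z are hard-concrete
# gates. Assumes PyTorch; dictionary choice is illustrative.
import torch
import torch.nn as nn

def hard_concrete_gate(log_alpha, gamma=-0.1, zeta=1.1, beta=2.0 / 3.0):
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    return (s * (zeta - gamma) + gamma).clamp(0.0, 1.0)

class SparsePolynomialPolicy(nn.Module):
    def __init__(self, state_dim, action_dim, degree=3):
        super().__init__()
        self.degree = degree
        n_features = 1 + state_dim * degree  # [1, s, s^2, ..., s^degree]
        self.coeffs = nn.Parameter(0.01 * torch.randn(n_features, action_dim))
        self.log_alpha = nn.Parameter(torch.zeros(n_features, action_dim))

    def features(self, s):
        # Polynomial dictionary of the state, evaluated column-wise.
        feats = [torch.ones(s.shape[0], 1)]
        feats += [s ** d for d in range(1, self.degree + 1)]
        return torch.cat(feats, dim=-1)

    def forward(self, s):
        z = hard_concrete_gate(self.log_alpha)  # stochastic gates during training
        return self.features(s) @ (self.coeffs * z)

# After training, the surviving terms of coeffs * z can be read off as an
# explicit polynomial control law, which is where interpretability comes from.
policy = SparsePolynomialPolicy(state_dim=4, action_dim=1)
action = policy(torch.randn(32, 4))
```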
Results: Kuramoto-Sivashinsky PDE
Training and evaluation results showing the superiority of the L0-sparse polynomial TD3 agent.
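For reference, the Kuramoto-Sivashinsky equation with an additive control forcing f(x, t) is usually written as below; the exact parametrization varied in the paper (e.g. a viscosity-like coefficient ν) is an assumption here.

```latex
% Standard KS equation with control forcing; nu is an assumed parameter.
\begin{equation}
  \partial_t u + u\,\partial_x u + \partial_x^2 u + \nu\,\partial_x^4 u = f(x, t)
\end{equation}
```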
Results: Convection-Diffusion-Reaction PDE
Training and evaluation results demonstrating the effectiveness of the L0-sparse polynomial TD3 agent.
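Likewise, a generic convection-diffusion-reaction equation with control forcing takes the form below; the specific convection speed c, diffusivity D, and reaction term R(u) varied in the paper are assumptions here.

```latex
% Generic CDR equation with control forcing; c, D, R(u) are assumed forms.
\begin{equation}
  \partial_t u + c\,\partial_x u = D\,\partial_x^2 u + R(u) + f(x, t)
\end{equation}
```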
Quotes
"Our sparse policy architecture is agnostic to the DRL method."
"The choice of α = 0.1 is dictated by the need for balancing the contribution of state-tracking cost c1 and control-effort cost c2."