Core Concepts
The authors propose a method, tLaSDI, that embeds thermodynamic principles into latent space dynamics using the GENERIC formalism and neural networks. The approach equips the latent dynamics with learned energy and entropy functions while achieving robust generalization.
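The GENERIC formalism referenced above takes a standard form (written here in general terms, not in the paper's specific parameterization): the latent state $z$ evolves as

\[
\frac{dz}{dt} = L(z)\,\nabla E(z) + M(z)\,\nabla S(z),
\]

where $E$ is an energy, $S$ an entropy, $L$ is antisymmetric, and $M$ is symmetric positive semi-definite. The degeneracy conditions $L\,\nabla S = 0$ and $M\,\nabla E = 0$ then yield energy conservation, $\dot{E} = \nabla E^\top \dot{z} = 0$, and non-negative entropy production, $\dot{S} = \nabla S^\top \dot{z} \ge 0$.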
Abstract
The abstract introduces tLaSDI, a data-driven method that embeds thermodynamic principles into latent dynamics through neural networks. Numerical examples demonstrate robust extrapolation ability and a correlation between latent entropy production rates and qualitative solution behavior.
The study compares tLaSDI with other methods in predicting dynamical systems such as Burgers' equation, and highlights the importance of training the autoencoder and the latent dynamics simultaneously for improved performance. An abstract error estimate provides a theoretical foundation for the proposed loss function.
Key points include the use of autoencoders for dimension reduction, the application of the GENERIC formalism to design the latent dynamics, and a new loss formulation that involves computing the autoencoder Jacobian. Numerical experiments showcase tLaSDI's superior performance in extrapolation tasks.
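The structure-preserving idea behind GENERIC-based latent dynamics (realized in the paper via GFINNs) can be illustrated with a minimal numpy sketch. This is not the paper's architecture: the quadratic energy, linear entropy, and projector-based construction of L and M below are illustrative choices that enforce the degeneracy conditions by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # latent dimension (illustrative)

# Hypothetical quadratic energy E(z) = 0.5*||z||^2 and linear entropy S(z) = s.z
z = rng.normal(size=n)
s = rng.normal(size=n)
grad_E = z  # gradient of E at z
grad_S = s  # gradient of S (constant)

def proj_orthogonal(v):
    """Projector onto the subspace orthogonal to v."""
    u = v / np.linalg.norm(v)
    return np.eye(len(v)) - np.outer(u, u)

# Antisymmetric L satisfying the degeneracy condition L @ grad_S = 0
L0 = rng.normal(size=(n, n))
L0 = L0 - L0.T                      # antisymmetric seed
P = proj_orthogonal(grad_S)
L = P @ L0 @ P                      # still antisymmetric; annihilates grad_S

# Symmetric positive semi-definite M satisfying M @ grad_E = 0
M0 = rng.normal(size=(n, n))
Q = proj_orthogonal(grad_E)
M = Q @ (M0 @ M0.T) @ Q             # symmetric PSD; annihilates grad_E

# GENERIC latent dynamics: dz/dt = L grad_E + M grad_S
dzdt = L @ grad_E + M @ grad_S

dE_dt = grad_E @ dzdt  # zero by construction (energy conservation)
dS_dt = grad_S @ dzdt  # nonnegative by construction (entropy production)
print(f"dE/dt = {dE_dt:.2e}, dS/dt = {dS_dt:.2e}")
```

Antisymmetry of L makes the reversible term conserve energy, while the PSD factorization of M makes entropy production nonnegative; the projectors enforce the two degeneracy conditions exactly rather than penalizing them in a loss.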
Stats
The numbers of parameters for the hyper-autoencoder and the GFINNs in tLaSDI are approximately 905K and 35K, respectively.
tLaSDI is trained with the Adam optimizer for 42K iterations.