Core Concepts
FeNNol is a new library for building, training, and running efficient and flexible force-field-enhanced neural network potentials.
Abstract
The paper presents FeNNol, a new Python library designed for building, training, and running machine-learning potentials, with a particular focus on physics-enhanced neural networks. FeNNol provides a flexible and modular system that allows users to easily build custom models, enabling the combination of state-of-the-art atomic embeddings with ML-parameterized physical interaction terms, without the need for explicit programming.
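As a purely illustrative sketch of this declarative composition idea, a model could be described as a mapping of named modules rather than as code; the keys and module names below are hypothetical and do not reflect FeNNol's actual configuration schema.

```python
# Hypothetical model description, illustrating composition without explicit programming.
# None of these keys or module names are taken from FeNNol's real configuration format.
model_spec = {
    "embedding":      {"type": "atomic_embedding", "cutoff": 5.0},
    "energy_nn":      {"type": "neural_network", "hidden_layers": [128, 128]},
    "dispersion":     {"type": "physical_term", "kind": "dispersion"},
    "electrostatics": {"type": "physical_term", "kind": "coulomb"},
}
```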
The key highlights of the FeNNol library include:
- Leveraging the Jax framework and its just-in-time compilation capabilities to enable fast evaluation of neural network potentials, shrinking the performance gap between ML potentials and standard force-fields (see the sketch after this list).
- Providing a collection of efficient and configurable modules that can be composed into complex models, including preprocessing modules (e.g., for neighbor-list operations), atomic embeddings, chemical and radial encodings, physics modules, neural networks, and operation modules.
- Introducing the "CRATE" multi-paradigm embedding that combines chemical and geometric information from different sources, allowing users to tailor the architecture for their data and computational efficiency requirements.
- Offering a training system that enables users to define complex models and train them on generic tasks, including support for multi-stage training and transfer learning.
- Providing multiple ways to run molecular dynamics simulations with FeNNix models, including custom Python scripts, the Atomic Simulation Environment (ASE) calculator, the Tinker-HP MD engine, and FeNNol's native MD engine.
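To make the Jax pattern from the first highlight concrete, here is a minimal sketch in plain JAX (not FeNNol's API): a toy pairwise potential is JIT-compiled, and forces are obtained by automatic differentiation as the negative gradient of the energy. The potential and coordinates are invented for illustration.

```python
import jax
import jax.numpy as jnp

def pair_energy(positions):
    """Toy Lennard-Jones-like pairwise energy in reduced units (illustrative only)."""
    diff = positions[:, None, :] - positions[None, :, :]
    # Add an identity matrix to the squared distances to avoid sqrt(0) on the diagonal.
    dist = jnp.sqrt(jnp.sum(diff**2, axis=-1) + jnp.eye(positions.shape[0]))
    mask = jnp.triu(jnp.ones_like(dist), k=1)  # count each pair once, skip self-pairs
    inv6 = dist**-6
    return jnp.sum(mask * 4.0 * (inv6**2 - inv6))

# JIT-compile a function that returns the energy and the forces (minus the gradient).
energy_and_forces = jax.jit(lambda pos: (pair_energy(pos), -jax.grad(pair_energy)(pos)))

positions = jnp.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [0.0, 1.2, 0.0]])
energy, forces = energy_and_forces(positions)
print(energy, forces.shape)
```

The same `jit`/`grad` pattern carries over when the toy energy function is replaced by a trained neural network potential; only the energy function changes.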
The authors demonstrate the performance of FeNNol's models and native MD engine by showing that their implementation of the popular ANI-2x model reaches simulation speeds close to the optimized GPU-accelerated Tinker-HP implementation of the AMOEBA force-field on commodity GPUs.
Statistics
FeNNol's implementation of the ANI-2x model reaches simulation speeds nearly on par with the AMOEBA polarizable force-field on commodity GPUs.
FeNNol's native MD engine is roughly three times faster than running the same model through the ASE MD engine for smaller systems.
Using a neighbor-list "skin" and reconstructing the full neighbor list only once every 40 fs (80 steps) further improves performance, reaching levels close to the AMOEBA force field on smaller systems (a generic version of this optimization is sketched below).
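As a generic sketch of the neighbor-list skin trick (this is not FeNNol's implementation; the cutoff, skin, and interval values are arbitrary), the list is built with an enlarged radius `cutoff + skin` so that it remains valid for many steps and is only rebuilt at a fixed interval; 80 steps correspond to 40 fs at a 0.5 fs timestep.

```python
import numpy as np

def build_neighbor_list(positions, cutoff, skin):
    """Brute-force O(N^2) neighbor list built with an enlarged radius (cutoff + skin).

    Illustrative only: a production MD engine would use cell lists and padded arrays.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)          # exclude self-pairs
    src, dst = np.nonzero(dist < cutoff + skin)
    return np.stack([src, dst], axis=0)     # shape (2, n_pairs)

# Hypothetical MD loop: the enlarged list is rebuilt only every `rebuild_interval`
# steps; between rebuilds, pairs beyond the true cutoff are filtered at evaluation time.
# for step in range(n_steps):
#     if step % rebuild_interval == 0:      # e.g. rebuild_interval = 80
#         pairs = build_neighbor_list(pos, cutoff=5.0, skin=2.0)
#     pos = integrate(pos, pairs)           # placeholder for the integrator/model call
```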
Quotes
"FeNNol leverages the automatic differentiation and just-in-time compilation features of the Jax Python library to enable fast evaluation of NNPs, shrinking the performance gap between ML potentials and standard force-fields."
"FeNNol provides a flexible and modular system that allows users to easily build custom models, allowing for example the combination of state-of-the-art atomic embeddings with ML-parameterized physical interaction terms, without the need for explicit programming."
"We hope that FeNNol will facilitate the development and application of new hybrid NNP architectures for a wide range of molecular simulation problems."