Training normalizing flows on computationally intensive target probability distributions, such as those arising in lattice field theories, can benefit from the REINFORCE gradient estimator, which yields faster convergence and lower memory usage. The study demonstrates improved performance on the 2D Schwinger model with Wilson fermions at criticality compared to the standard reparameterization-trick estimator.
Monte Carlo simulations are essential computational tools across many fields, but critical slowing down near phase transitions limits their efficiency. Machine-learning techniques such as normalizing flows offer a way to generate statistically independent configurations. The study focuses on the Neural Markov Chain Monte Carlo (NMCMC) algorithm and its application to lattice field theories.
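The core of NMCMC is an independence Metropolis step: configurations are proposed afresh from the trained flow and accepted or rejected against the target. The sketch below illustrates the accept/reject logic on a toy 1D problem; the Gaussian "flow" proposal and Gaussian target are illustrative stand-ins, not the paper's lattice setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D stand-in: the "flow" proposal q is a fixed N(0, 1.5^2)
# and the target p is N(0, 1). Normalizations cancel in the Metropolis ratio.
def log_p(x):
    return -0.5 * x**2

def log_q(x):
    return -0.5 * (x / 1.5) ** 2 - np.log(1.5)

def nmcmc(n_steps):
    """Independence Metropolis: propose from q, accept with
    probability min(1, [p(x')/q(x')] / [p(x)/q(x)])."""
    x = rng.normal(0.0, 1.5)
    log_w = log_p(x) - log_q(x)          # importance log-weight of current state
    chain, accepted = [], 0
    for _ in range(n_steps):
        x_new = rng.normal(0.0, 1.5)
        log_w_new = log_p(x_new) - log_q(x_new)
        if np.log(rng.uniform()) < log_w_new - log_w:
            x, log_w = x_new, log_w_new
            accepted += 1
        chain.append(x)
    return np.array(chain), accepted / n_steps

samples, acc_rate = nmcmc(20000)
```

The better the flow approximates the target, the higher the acceptance rate and the shorter the autocorrelations, which is exactly why training quality matters.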
Standard stochastic gradient descent requires an estimator of the loss gradient, commonly obtained via the reparameterization trick. For complex target distributions, however, such as quantum chromodynamics with dynamical fermions, the reparameterization trick requires differentiating through the evaluation of the target density itself, which can degrade performance. The REINFORCE algorithm provides an alternative, score-function gradient estimator that needs no derivatives of the target distribution.
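A minimal sketch of the score-function (REINFORCE) estimator for the reverse Kullback-Leibler loss, assuming a toy one-parameter-pair affine "flow" and a cheap Gaussian stand-in target; all names and constants here are illustrative, not taken from the paper. Note that only the derivative of log q appears, never the derivative of log p.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "flow": x = mu + exp(s) * z with z ~ N(0, 1), so q(x) = N(mu, exp(2s)).
mu, s = 0.0, 0.0

def log_p(x):
    # Cheap stand-in target N(2, 0.5^2); in lattice applications this
    # evaluation is the expensive part, so avoiding its gradient matters.
    return -0.5 * ((x - 2.0) / 0.5) ** 2 - np.log(0.5)

def log_q(x, mu, s):
    return -0.5 * ((x - mu) / np.exp(s)) ** 2 - s

lr, batch = 0.02, 512
for step in range(3000):
    x = mu + np.exp(s) * rng.standard_normal(batch)   # sample from current q
    f = log_q(x, mu, s) - log_p(x)                    # per-sample reverse-KL term
    f = f - f.mean()                                  # baseline for variance reduction
    # REINFORCE gradients: E[f * d(log q)/d(theta)] -- no grad of log_p needed.
    dlogq_dmu = (x - mu) / np.exp(2 * s)
    dlogq_ds = ((x - mu) / np.exp(s)) ** 2 - 1.0
    mu -= lr * np.mean(f * dlogq_dmu)
    s -= lr * np.mean(f * dlogq_ds)
```

After training, `mu` approaches 2 and `exp(s)` approaches 0.5, matching the target; the baseline subtraction is a standard variance-reduction device for score-function estimators.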
Implementing the REINFORCE estimator for the 2D Schwinger model yielded significant improvements in training speed and memory efficiency over the reparameterization trick. The study highlights the potential benefits of alternative gradient estimators in machine-learning applications involving computationally intensive target distributions.
Key insights distilled from source content by Piotr Bialas... at arxiv.org, 02-29-2024
https://arxiv.org/pdf/2308.13294.pdf