The paper introduces free-form flows (FFF), a new approach to training normalizing flow models that removes the architectural constraints typically required for analytical invertibility and tractable Jacobian computations.
The key innovation is an efficient gradient estimator that allows training any dimension-preserving neural network as a generative model through maximum likelihood optimization. This is achieved by learning an approximate inverse of the encoder network and using a reconstruction loss to keep the encoder and decoder close to mutual inverses.
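For context, the exact maximum likelihood objective for a normalizing flow follows from the change-of-variables formula. The sketch below restates the idea in our own notation (encoder $f_\theta$, decoder $g_\phi$, latent prior $p_Z$, SG for stop-gradient); the precise form of the estimator should be checked against the paper:

```latex
% Exact normalizing-flow negative log-likelihood via change of variables:
\mathcal{L}_{\mathrm{NLL}}(\theta)
  = \mathbb{E}_{x}\!\left[\, -\log p_Z\bigl(f_\theta(x)\bigr)
      - \log\bigl|\det J_{f_\theta}(x)\bigr| \,\right]

% FFF replaces the gradient of the log-determinant with a trace surrogate
% in which the decoder Jacobian stands in for the inverse Jacobian,
% estimated stochastically (Hutchinson-style) with random probes v:
\nabla_\theta \log\bigl|\det J_{f_\theta}(x)\bigr|
  \approx \nabla_\theta\, \mathbb{E}_{v}\!\left[\,
      v^\top\, \mathrm{SG}\bigl[J_{g_\phi}(z)\bigr]\, J_{f_\theta}(x)\, v
    \,\right],
  \qquad z = f_\theta(x)
```

The full training loss then adds a weighted reconstruction term $\beta\,\mathbb{E}_x\|g_\phi(f_\theta(x)) - x\|^2$, which is what keeps the decoder Jacobian a valid stand-in for the inverse.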
The authors show theoretically that optimizing this relaxed objective has the same critical points as the original maximum likelihood objective, provided the reconstruction loss is driven to zero so that the decoder is an exact inverse of the encoder. They also prove that solutions of the relaxed objective exactly match the data distribution.
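To make the mechanics concrete, here is a minimal PyTorch sketch of the relaxed objective, assuming a standard-normal latent prior and a single Hutchinson probe per sample; `encoder`, `decoder`, and `beta` are illustrative names and values, not the authors' reference implementation:

```python
import math

import torch
import torch.nn.functional as F


def fff_loss(encoder, decoder, x, beta=10.0):
    """Maximum-likelihood surrogate plus reconstruction penalty (sketch)."""
    v = torch.randn_like(x)  # random probe with E[v v^T] = I

    # Forward pass z = f(x) and the Jacobian-vector product J_f v in one call.
    z, jvp = torch.func.jvp(encoder, (x,), (v,))

    # Vector-Jacobian product v^T J_g through the decoder at z.
    x_rec, vjp_fn = torch.func.vjp(decoder, z)
    (vjp,) = vjp_fn(v)

    # Hutchinson estimate of tr(SG[J_g] J_f); detaching the decoder factor
    # makes its gradient a surrogate for grad log|det J_f|.
    logdet_surrogate = (vjp.detach() * jvp).flatten(1).sum(dim=1)

    # Standard-normal latent prior log-density.
    dim = z[0].numel()
    log_pz = -0.5 * (z**2).flatten(1).sum(dim=1) - 0.5 * dim * math.log(2 * math.pi)

    nll = -(log_pz + logdet_surrogate).mean()
    recon = F.mse_loss(x_rec, x)  # keeps the decoder close to the inverse
    return nll + beta * recon
```

A single probe keeps each step as cheap as one extra forward-mode and one extra reverse-mode pass; averaging over more probes would reduce the variance of the trace estimate at proportional cost.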
Experimentally, the authors demonstrate the versatility of free-form flows. On a simulation-based inference benchmark, FFF models achieve competitive performance with minimal tuning. On molecule generation tasks, the authors exploit this architectural freedom to use equivariant graph neural networks, outperforming previous normalizing flow approaches in both sample quality and generation speed.