Core Concept
The authors propose several improved techniques, spanning both training and evaluation, that allow likelihood estimation with diffusion ODEs to outperform existing state-of-the-art likelihood estimators.
Summary
The paper presents improved techniques for maximum likelihood estimation of diffusion ordinary differential equation (ODE) models.
Key highlights:
- Diffusion ODEs are a special case of continuous normalizing flows, which enables deterministic inference and exact likelihood evaluation. However, their likelihood estimates still fall short of those of state-of-the-art likelihood-based generative models.
- For training, the authors propose velocity parameterization and explore variance reduction techniques for faster convergence. They also derive an error-bounded high-order flow matching objective for finetuning, which improves the ODE likelihood and smooths its trajectory.
- For evaluation, the authors propose a novel training-free truncated-normal dequantization that bridges the training-evaluation gap common in diffusion ODEs.
- The authors achieve state-of-the-art likelihood estimation results on image datasets without variational dequantization or data augmentation, surpassing previous ODE-based methods.
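Because diffusion ODEs are continuous normalizing flows, exact log-likelihood follows from the instantaneous change-of-variables formula: integrating the probability-flow ODE from data to the prior while accumulating the divergence of the velocity field. A minimal numerical sketch of that principle (not the authors' implementation; the linear velocity field, Euler solver, and step count are illustrative assumptions):

```python
import numpy as np

def ode_log_likelihood(x0, v, div_v, n_steps=1000):
    """Estimate log p(x0) by integrating dx/dt = v(x, t) from data
    (t=0) to a standard-normal prior (t=1), using the instantaneous
    change-of-variables formula:
        log p0(x0) = log p1(x(1)) + integral_0^1 div v(x(t), t) dt.
    """
    x, logdet = x0, 0.0
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        logdet += div_v(x, t) * dt  # accumulate the divergence integral
        x = x + v(x, t) * dt        # Euler step of the probability-flow ODE
    log_prior = -0.5 * (x**2 + np.log(2 * np.pi))  # log N(x(1); 0, 1)
    return log_prior + logdet

# Toy linear field v(x, t) = -x, whose flow and divergence are known in
# closed form, so the numerical estimate can be checked analytically.
v = lambda x, t: -x
div_v = lambda x, t: -1.0

x0 = 0.5
est = ode_log_likelihood(x0, v, div_v)
# Analytic: x(1) = x0 * e^{-1}, so log p0(x0) = log N(x0 * e^{-1}; 0, 1) - 1
exact = -0.5 * ((x0 * np.exp(-1.0))**2 + np.log(2 * np.pi)) - 1.0
```

In practice the divergence of a neural velocity field is itself estimated stochastically (e.g. with a Hutchinson trace estimator) rather than evaluated in closed form as in this toy case.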
Key Statistics
The authors report the following key metrics on the CIFAR-10 and ImageNet-32 datasets:
- Negative log-likelihood (NLL) in bits/dim
- Fréchet Inception Distance (FID)
- Number of function evaluations (NFE) during sampling
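For 8-bit image data, the bits/dim NLL is computed from a continuous model density via dequantization. A minimal sketch using the standard uniform-dequantization bound (the paper's contribution replaces the uniform proposal with a training-free truncated normal; the function name and the placeholder density below are illustrative assumptions):

```python
import numpy as np

def bpd_uniform_dequant(log_p, x_uint8, n_samples=8, rng=None):
    """Bits/dim upper bound for 8-bit data via uniform dequantization:
        bpd <= E_u[ -log2 p((x + u) / 256) ] / D + 8,
    where u ~ Uniform[0, 1)^D and D is the number of dimensions.
    The +8 term accounts for the 1/256 rescaling of each pixel."""
    rng = rng or np.random.default_rng(0)
    D = x_uint8.size
    vals = []
    for _ in range(n_samples):
        u = rng.uniform(size=x_uint8.shape)
        y = (x_uint8 + u) / 256.0  # dequantized sample in [0, 1)^D
        vals.append(-log_p(y) / (D * np.log(2.0)) + 8.0)
    return float(np.mean(vals))
```

As a sanity check, a model whose density is uniform on [0, 1)^D (log p = 0 everywhere) yields exactly 8 bits/dim, matching the raw entropy of 8-bit pixels.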