Energy-Calibrated Variational Autoencoder Outperforms State-of-the-Art Generative Models with Efficient Single-Step Sampling
The proposed Energy-Calibrated Variational Autoencoder (EC-VAE) uses a conditional Energy-Based Model (EBM) to calibrate the generative direction of a Variational Autoencoder (VAE) during training. Because the calibration is applied only at training time, the model generates high-quality samples in a single decoder pass, without the expensive Markov Chain Monte Carlo (MCMC) sampling that EBMs normally require at test time. The same energy-based calibration can also be extended to enhance variational learning and normalizing flows, and applied to zero-shot image restoration tasks.
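The core mechanism described above (refine the decoder's samples with short-run MCMC under an energy function during training, then sample in a single step at test time) can be illustrated with a deliberately tiny 1-D sketch. Everything here is a hypothetical toy under simplifying assumptions, not the paper's actual architecture, energy model, or objective: the quadratic `energy`, the linear one-parameter `decode`, and the Langevin step sizes are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data" distribution: N(3, 0.5^2). A fixed quadratic energy whose
# minimum sits at the data mean stands in for the learned conditional EBM.
DATA_MU, DATA_SIGMA = 3.0, 0.5

def grad_energy(x):
    # Gradient of E(x) = 0.5 * ((x - mu) / sigma)^2
    return (x - DATA_MU) / DATA_SIGMA**2

# A two-parameter linear "decoder" x = w * z + b stands in for the VAE
# generator; z plays the role of the latent code.
w, b = 1.0, 0.0

def decode(z):
    return w * z + b

STEP, N_LANGEVIN, LR = 0.05, 10, 0.1
for it in range(500):
    z = rng.standard_normal(64)
    x = decode(z)
    # Calibration: short-run Langevin dynamics under the energy.
    # This MCMC runs ONLY during training.
    for _ in range(N_LANGEVIN):
        x = x - STEP * grad_energy(x) + np.sqrt(2 * STEP) * rng.standard_normal(64)
    # Least-squares update pulling decode(z) toward the calibrated samples.
    err = decode(z) - x
    w -= LR * np.mean(err * z)
    b -= LR * np.mean(err)

# Test time: a single decoder pass, no MCMC.
samples = decode(rng.standard_normal(10_000))
print(samples.mean())  # should land near DATA_MU = 3.0
```

In this toy, the decoder ends up producing samples centered on the energy minimum in one step, which mirrors the abstract's claim that calibration during training removes the need for MCMC at test time; the real method of course learns both the VAE and the conditional EBM jointly on images rather than fitting a line in 1-D.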