Core Concepts
VISA reduces the cost of importance-weighted forward-KL variational inference by reusing model evaluations across gradient steps through sequential sample-average approximations.
Abstract
The article introduces Variational Inference with Sequential Sample-Average Approximations (VISA), a method for approximate inference in computationally intensive models. VISA extends importance-weighted forward-KL variational inference by replacing fresh sampling at every step with a sequence of sample-average approximations, each of which is kept fixed as long as the variational distribution remains within a trust region. This allows model evaluations to be reused across multiple gradient steps, reducing computational cost. Experiments on high-dimensional Gaussians, Lotka-Volterra dynamics, and a Pickover attractor demonstrate that VISA achieves accuracy comparable to standard importance-weighted forward-KL variational inference with significant computational savings. The paper also reviews the necessary background on variational inference, reparameterized VI, and importance-weighted forward-KL VI, presents the VISA algorithm together with implementation details, and reports results showing convergence with fewer model evaluations than traditional methods.
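The core idea in the abstract can be sketched in code: draw a batch of samples and importance weights once, reuse that fixed sample-average approximation for many gradient steps on the forward-KL surrogate, and refresh only when the variational distribution drifts outside a trust region around the proposal that generated the samples. The sketch below is a minimal illustration on a 1-D Gaussian toy problem, not the paper's implementation; the target density, the mean-absolute-log-density trust-region test, and all parameter names are simplifying assumptions.

```python
import numpy as np

# Toy "expensive" model: unnormalized log-density of N(3, 0.5^2).
def log_p(z):
    return -0.5 * ((z - 3.0) / 0.5) ** 2

# Gaussian variational family q(z | mu, sigma), parameterized by log sigma.
def log_q(z, mu, log_sigma):
    sigma = np.exp(log_sigma)
    return -0.5 * ((z - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2 * np.pi)

def visa_sketch(steps=500, n=256, lr=0.05, tau=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu, log_sigma = 0.0, 0.0       # initial variational parameters
    z = w = anchor = None
    n_model_evals = 0
    for _ in range(steps):
        # Refresh the sample-average approximation only when the current
        # parameters leave a (simplified) trust region around the anchor
        # proposal that generated the current samples.
        if z is None or np.mean(np.abs(log_q(z, mu, log_sigma)
                                       - log_q(z, *anchor))) > tau:
            sigma = np.exp(log_sigma)
            z = mu + sigma * rng.standard_normal(n)
            logw = log_p(z) - log_q(z, mu, log_sigma)  # model evaluated here only
            w = np.exp(logw - logw.max())
            w /= w.sum()                               # self-normalized weights
            anchor = (mu, log_sigma)
            n_model_evals += n
        # Gradient ascent on the fixed-sample forward-KL surrogate,
        # i.e. the weighted log-likelihood sum_i w_i * log q(z_i).
        sigma2 = np.exp(2 * log_sigma)
        grad_mu = np.sum(w * (z - mu)) / sigma2
        grad_ls = np.sum(w * ((z - mu) ** 2 / sigma2 - 1.0))
        mu += lr * grad_mu
        log_sigma += lr * grad_ls
    return mu, np.exp(log_sigma), n_model_evals
```

Because the weights are frozen between refreshes, `n_model_evals` stays well below one batch of model evaluations per gradient step, which is the source of the computational savings the paper reports.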
Stats
"VISA can achieve comparable approximation accuracy to standard importance-weighted forward-KL variational inference with computational savings of a factor two or more."
"Savings of a factor two or more are realizable with conservatively chosen learning rates."
"VISA requires fewer evaluations per gradient step compared to IWFVI."