Core Concept
Prior diffusion in Langevin algorithms can achieve dimension-independent convergence for a broader class of target distributions beyond log-concavity.
Summary
The paper studies how the computational complexity of high-dimensional sampling problems depends on dimension. It introduces a modified Langevin algorithm with prior diffusion (LAPD) that achieves dimension-independent convergence for a broader class of target distributions beyond log-concave ones. The analysis tracks the convergence of the KL divergence under both fixed and varying step size schedules, providing insights toward faster sampling algorithms; a hedged sketch of the update appears after the outline. The content is structured as follows:
- Abstract
- Introduction
- Sampling from unnormalized distribution
- Langevin algorithms and their popularity
- Sampling algorithms categorized by dimension dependency
- Freund et al.'s suggestion on dimension-independent convergence
- Prior diffusion for Gaussian mixtures
- Theoretical results on KL convergence with fixed and varying step sizes
- Proof sketch and discussion on specific examples
- Conclusions and future work
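For concreteness, the following is a minimal sketch of one update of a Langevin algorithm with prior diffusion. The factorization of the target as pi(x) ∝ exp(-f(x) - m‖x‖²/2), the parameter name m, and the exponential-integrator form are illustrative assumptions; the paper's exact discretization may differ.

```python
import numpy as np

def lapd_step(x, grad_f, eta, m, rng):
    """One Langevin step in which the Gaussian 'prior' part of the target
    is diffused exactly (an Ornstein-Uhlenbeck step), while only grad f
    is discretized. Assumes m > 0; the target form is an assumption."""
    decay = np.exp(-m * eta)                     # exact OU contraction from the prior
    drift_weight = (1.0 - decay) / m             # exponential-integrator weight on grad f
    noise_scale = np.sqrt((1.0 - decay**2) / m)  # matches the OU noise variance over eta
    return decay * x - drift_weight * grad_f(x) + noise_scale * rng.standard_normal(x.shape)

def lapd_sample(x0, grad_f, step_sizes, m, seed=0):
    """Run the updates under a step size schedule (fixed or varying),
    mirroring the two schedule regimes treated in the KL analysis."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for eta in step_sizes:
        x = lapd_step(x, grad_f, eta, m, rng)
    return x
```

A fixed schedule corresponds to `np.full(n, eta)`, a varying one to e.g. `eta0 / np.sqrt(1.0 + np.arange(n))`; the specific schedules analyzed in the paper are not reproduced here.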
Statistics
Freund et al. (2022) suggest that the convergence rate of the modified Langevin dynamics depends only on the trace of the log-likelihood Hessian.
For Gaussian mixture targets, the convergence rate of LAPD depends only on the number of mixture components K and the radius Rµ of the component means.
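To see where the dimension factor usually enters, recall the classical ULA bound under a log-Sobolev inequality with constant α and an L-smooth potential (a standard result, not one from this paper); dimension-free analyses schematically replace the explicit dL² factor by a trace quantity:

```latex
\mathrm{KL}(\rho_k \,\|\, \pi)
  \;\le\; e^{-\alpha \eta k}\,\mathrm{KL}(\rho_0 \,\|\, \pi)
  \;+\; O\!\left(\frac{\eta\, d\, L^2}{\alpha}\right)
% Dimension-free regimes replace d L^2 by a quantity like tr(\nabla^2 f),
% which can stay O(1) as d grows when the Hessian eigenvalues are summable.
```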
Quotes
"Understanding the dimension dependency of computational complexity in high-dimensional sampling problem is a fundamental problem."
"LAPD can be considered as a more general version of ULA and is able to achieve a faster convergence by properly tuning m."