The author examines how the computational complexity of high-dimensional sampling depends on dimension, focusing on the advantages of biased samplers over unbiased ones. The modified Langevin algorithm with prior diffusion is shown to achieve dimension-independent convergence for a broader class of target distributions.
Prior diffusion in Langevin algorithms can achieve dimension-independent convergence for a broader class of target distributions beyond log-concavity.
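To make the idea concrete, here is a minimal sketch contrasting the standard unadjusted Langevin step with a prior-diffusion variant. It assumes a target of the form π(x) ∝ exp(−g(x) − ‖x‖²/2), i.e. a Gaussian prior times a potential g; the prior-diffusion step discretizes only ∇g and integrates the Gaussian (Ornstein–Uhlenbeck) part exactly. The function names, the step size `eta`, and the unit-variance prior are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ula_step(x, grad_f, eta, rng):
    """Standard unadjusted Langevin step for target exp(-f(x))."""
    return x - eta * grad_f(x) + np.sqrt(2 * eta) * rng.standard_normal(x.shape)

def prior_diffusion_step(x, grad_g, eta, rng):
    """One step targeting pi(x) ∝ exp(-g(x) - ||x||^2 / 2).

    Only the non-quadratic part g is discretized; the Gaussian-prior
    component is handled by the exact Ornstein-Uhlenbeck transition,
    which avoids the discretization error on the quadratic term.
    """
    y = x - eta * grad_g(x)                      # explicit step on grad_g only
    contraction = np.exp(-eta)                   # exact OU mean contraction over time eta
    noise_scale = np.sqrt(1.0 - np.exp(-2.0 * eta))  # exact OU noise std
    return contraction * y + noise_scale * rng.standard_normal(x.shape)

# Illustrative usage: with g = 0 the prior-diffusion step is an exact
# OU transition, so iterates converge to the standard Gaussian prior
# regardless of dimension.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
for _ in range(100):
    x = prior_diffusion_step(x, lambda z: np.zeros_like(z), 0.1, rng)
```

The design point this illustrates: because the Gaussian part of the dynamics is simulated without discretization error, the error analysis no longer picks up the dimension-dependent term that the quadratic component contributes in plain ULA.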