Core Concepts
Diffusive Gibbs Sampling (DiGS) is proposed as a new method for effective sampling from multi-modal distributions.
Abstract
Diffusive Gibbs Sampling (DiGS) is introduced as a novel approach to the poor mixing of conventional Markov Chain Monte Carlo (MCMC) methods on multi-modal distributions. Inspired by diffusion models, DiGS uses Gaussian convolution to bridge disconnected modes of the target distribution and demonstrates improved results across a variety of sampling tasks.
1. Introduction:
Generating samples from complex unnormalized probability distributions is crucial in machine learning, statistics, and the natural sciences.
The goal is to draw independent samples from the target distribution and estimate expectations of functions under the target distribution.
1.1. Score-Based MCMC Methods:
The Unadjusted Langevin Algorithm (ULA) follows a transition rule obtained by discretizing the Langevin SDE (see the sketch after this list).
The Metropolis-adjusted Langevin Algorithm (MALA) corrects ULA's discretization bias with a Metropolis-Hastings accept/reject step.
Hamiltonian Monte Carlo (HMC) augments the state x with an auxiliary momentum variable v.
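A minimal sketch of the first two methods in Python, assuming the user supplies the target's log-density `log_p` and its gradient `grad_log_p` (both names are illustrative, not from the paper):

```python
import numpy as np

def ula_step(x, grad_log_p, step_size, rng):
    """One Unadjusted Langevin Algorithm step:
    x' = x + step_size * grad log p(x) + sqrt(2 * step_size) * noise."""
    noise = rng.standard_normal(x.shape)
    return x + step_size * grad_log_p(x) + np.sqrt(2.0 * step_size) * noise

def mala_step(x, log_p, grad_log_p, step_size, rng):
    """ULA proposal followed by a Metropolis-Hastings accept/reject correction."""
    x_prop = ula_step(x, grad_log_p, step_size, rng)

    def log_q(dst, src):
        # Log density (up to an additive constant that cancels in the ratio)
        # of the Langevin proposal dst | src.
        mean = src + step_size * grad_log_p(src)
        return -np.sum((dst - mean) ** 2) / (4.0 * step_size)

    log_alpha = (log_p(x_prop) - log_p(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    return x_prop if np.log(rng.uniform()) < log_alpha else x
```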
1.2. Convolution-Based Method:
Convolving the target density with a Gaussian kernel smooths it, bridging disconnected modes so that a sampler can move between them.
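A small numerical illustration (not from the paper) of why this helps: the convolved density p̃(x̃) = ∫ N(x̃; αx, σ²I) p(x) dx is strictly positive in the gap between well-separated modes, so a chain on p̃ can travel across it.

```python
import numpy as np

# Bimodal 1-D target: equal mixture of two well-separated unit Gaussians.
def p(x):
    return 0.5 * np.exp(-0.5 * (x + 5.0) ** 2) / np.sqrt(2 * np.pi) \
         + 0.5 * np.exp(-0.5 * (x - 5.0) ** 2) / np.sqrt(2 * np.pi)

# Convolved density p_tilde(xt) = \int N(xt; alpha * x, sigma^2) p(x) dx,
# approximated by quadrature on a grid.
alpha, sigma = 1.0, 3.0
grid = np.linspace(-12, 12, 1001)
dx = grid[1] - grid[0]
kernel = lambda xt, x: (np.exp(-0.5 * ((xt - alpha * x) / sigma) ** 2)
                        / (sigma * np.sqrt(2 * np.pi)))
p_tilde = np.array([np.sum(kernel(xt, grid) * p(grid)) * dx for xt in grid])
# Unlike p, p_tilde has substantial mass between the two modes.
```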
2. Diffusive Gibbs Sampling:
DiGS augments the target variable x with a noisy auxiliary variable x̃ and uses a Gibbs sampler to sample from the joint distribution p(x, x̃).
It alternates between drawing x̃ ~ p(x̃|x) (a Gaussian noising step) and x ~ p(x|x̃) (a denoising step); see the sketch below.
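A minimal sketch of one DiGS sweep, assuming p(x̃|x) = N(x̃; αx, σ²I) and approximating the denoising conditional p(x|x̃) ∝ p(x)·N(x̃; αx, σ²I) with a short inner Langevin (ULA) chain; function and argument names are illustrative:

```python
import numpy as np

def digs_sweep(x, grad_log_p, alpha, sigma, rng,
               n_inner=20, step_size=1e-2):
    """One Gibbs sweep of DiGS.

    Noising step:   x_tilde ~ p(x_tilde | x) = N(alpha * x, sigma^2 I).
    Denoising step: approximately sample
        p(x | x_tilde) ∝ p(x) * N(x_tilde; alpha * x, sigma^2 I)
    by running a short unadjusted Langevin chain on its log-density.
    """
    x_tilde = alpha * x + sigma * rng.standard_normal(x.shape)

    # Gradient of the denoising conditional's log-density (up to a constant):
    # grad log p(z) + alpha * (x_tilde - alpha * z) / sigma^2.
    def grad_log_post(z):
        return grad_log_p(z) + alpha * (x_tilde - alpha * z) / sigma ** 2

    z = x_tilde / alpha  # initialization, cf. Section 2.1
    for _ in range(n_inner):
        noise = rng.standard_normal(z.shape)
        z = z + step_size * grad_log_post(z) + np.sqrt(2.0 * step_size) * noise
    return z
```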
2.1. Initialization of the Denoising Sampling Step:
Discusses how to initialize the inner Markov chain that samples the denoising conditional p(x|x̃), a choice that strongly affects whether the sampler can move to a new mode in practice.
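Since E[x̃|x] = αx, one natural initialization is x̃/α, which recentres the chain on the noisy sample rather than on the previous state; a hypothetical helper contrasting the two choices:

```python
def init_denoising_chain(x, x_tilde, alpha, rescale=True):
    """Starting point for the chain that samples p(x | x_tilde).

    Reusing the previous sample x keeps the chain near its old mode;
    starting from x_tilde / alpha (since E[x_tilde | x] = alpha * x)
    recentres it on the noisy sample, which may lie in the basin of a
    different mode.
    """
    return x_tilde / alpha if rescale else x
```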
2.2. Choosing the Gaussian Convolution Kernels:
The performance of DiGS is sensitive to the hyperparameters α and σ of the Gaussian convolution kernel N(x̃; αx, σ²I): the noise level must be large enough to bridge the gaps between modes, yet small enough that the denoising conditional remains easy to sample.
2.3. Multi-Level Noise Scheduling:
Introduces a sequence of Gaussian convolution kernels at multiple noise levels, which relaxes the need to pick a single (α, σ) pair precisely; see the sketch below.
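An illustrative run (not from the paper) that sweeps from coarse to fine noise levels on a toy 1-D bimodal target, tying α to σ via the variance-preserving rule α = √(1 − σ²) familiar from diffusion models, and reusing the `digs_sweep` sketch from Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target: equal mixture of N(-5, 1) and N(+5, 1).
def grad_log_p(x):
    w = 1.0 / (1.0 + np.exp(-10.0 * x))  # responsibility of the +5 mode
    return (1.0 - w) * (-(x + 5.0)) + w * (-(x - 5.0))

# Sweep from large sigma (heavily smoothed) to small sigma (close to target).
x = rng.standard_normal(1)
samples = []
for sigma in np.linspace(0.9, 0.1, num=5):
    alpha = np.sqrt(1.0 - sigma ** 2)  # illustrative variance-preserving rule
    for _ in range(200):  # DiGS sweeps at this noise level
        x = digs_sweep(x, grad_log_p, alpha, sigma, rng)
        samples.append(x.copy())
```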