Core Concepts
An unsupervised Bayesian training approach is proposed to learn convex neural network regularizers using only a fixed noisy dataset, based on a dual Markov chain estimation method. The learned regularizers come close to the performance of supervised adversarial regularization methods on natural-image Gaussian deconvolution and Poisson denoising tasks.
Abstract
The paper presents an unsupervised Bayesian training approach for learning convex neural network regularizers to solve inverse imaging problems, such as image denoising and deconvolution.
Key highlights:
- Inverse imaging problems are often ill-posed, requiring the use of regularized reconstruction operators. Variational regularization combines a data fidelity term with priors like wavelet or total variation priors.
- The authors propose an unsupervised Bayesian training approach to learn convex neural network regularizers using only a fixed noisy dataset, without requiring clean ground truth data.
- The approach is based on a dual Markov chain estimation method, in which coupled Markov chains target the prior and the posterior distributions.
- Experiments on Gaussian deconvolution and Poisson denoising tasks show the learned unsupervised regularizers perform close to supervised adversarial regularization methods.
- The unsupervised regularizers also generalize better than end-to-end deep learning methods when transferred to a different forward operator.
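To make the variational setup in the first highlight concrete, here is a minimal sketch of variational denoising with a smoothed total-variation prior, solved by plain gradient descent. This is an illustrative toy, not the paper's method: the function names, the smoothing parameter `eps`, and all step sizes are choices made for this example.

```python
import numpy as np

def grad(x):
    # forward differences with Neumann (zero-flux) boundary
    gx = np.zeros_like(x)
    gy = np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return gx, gy

def div(px, py):
    # negative adjoint of grad, so that <grad(x), p> = -<x, div(p)>
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]
    d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]
    d[:, 1:] -= py[:, :-1]
    return d

def denoise_tv(y, lam=0.2, eps=0.1, step=0.05, n_iter=300):
    """Minimize the variational objective
        0.5 * ||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps^2)
    (a smoothed total-variation prior) by gradient descent."""
    x = y.copy()
    for _ in range(n_iter):
        gx, gy = grad(x)
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        tv_grad = -div(gx / mag, gy / mag)  # gradient of the smoothed TV term
        x = x - step * ((x - y) + lam * tv_grad)
    return x
```

The data-fidelity term here is the simple denoising case (identity forward operator); deconvolution would replace `x - y` with `A.T @ (A @ x - y)` for a blur operator `A`.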
The authors extend previous work on Bayesian estimation of regularization parameters to the more general case of learning a convex neural network regularizer with a significantly larger number of parameters. The proposed algorithm and convergence analysis provide a framework for unsupervised training of image priors in high-dimensional inverse problems.
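The dual-chain idea can be sketched in a deliberately simplified setting. The toy below substitutes a quadratic regularizer R_theta(x) = 0.5 * theta * ||x||^2 for the paper's convex neural network, so theta plays the role of a single regularization parameter. Two unadjusted Langevin (ULA) chains run side by side, one targeting the posterior p(x | y; theta) and one the prior p(x; theta); the difference of E[dR/dtheta] under the two chains is a stochastic gradient of the marginal log-likelihood log p(y; theta), which is ascended. All function names and hyperparameters are assumptions of this sketch, not the authors' algorithm.

```python
import numpy as np

def learn_theta(Y, sigma, theta0=0.5, lr=1e-2, tau=0.05, n_steps=2000, seed=0):
    """Learn the weight theta of R_theta(x) = 0.5 * theta * ||x||^2 from
    noisy data Y alone, via dual ULA chains on the prior and posterior."""
    rng = np.random.default_rng(seed)
    theta = theta0
    x_post = Y.copy()            # one posterior chain per noisy data point
    x_pri = np.zeros_like(Y)     # prior chain
    for _ in range(n_steps):
        # ULA step on the posterior: -log density = ||x-y||^2/(2 sigma^2) + R_theta(x)
        g_post = (x_post - Y) / sigma**2 + theta * x_post
        x_post += -tau * g_post + np.sqrt(2 * tau) * rng.standard_normal(Y.shape)
        # ULA step on the prior: -log density = R_theta(x) (up to a constant)
        g_pri = theta * x_pri
        x_pri += -tau * g_pri + np.sqrt(2 * tau) * rng.standard_normal(Y.shape)
        # stochastic gradient of log p(y; theta):
        #   E_prior[dR/dtheta] - E_posterior[dR/dtheta], with dR/dtheta = 0.5*||x||^2
        g = 0.5 * np.mean(x_pri**2) - 0.5 * np.mean(x_post**2)
        theta = max(theta + lr * g, 1e-6)  # gradient ascent, keep theta positive
    return theta
```

Note that no clean data enters the update: the posterior chain only sees the noisy observations, which is the essence of the unsupervised setting.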
Stats
The initial PSNR of the corrupted image is 22.38 dB for Gaussian deconvolution and 21.16 dB for Poisson denoising.
Quotes
"Unsupervised learning is a training approach in the situation where ground truth data is unavailable, such as inverse imaging problems."
"Variational models combine a data fidelity term with priors such as wavelet priors or total variation priors to define the reconstruction of a measurement."
"The variational formulation can be formulated as a special case of the Bayesian formulation, where the negative prior log-density and negative log-likelihood correspond to the regularization and fidelity terms respectively."
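The correspondence in the last quote is the standard MAP identity. Writing it out for a Gaussian likelihood (an illustrative choice; the paper also treats the Poisson case), with forward operator A and noise level sigma:

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_x \, p(x \mid y)
  = \arg\min_x \, \underbrace{-\log p(y \mid x)}_{\text{fidelity}}
                  \underbrace{\,-\,\log p(x)}_{\text{regularizer}}
  = \arg\min_x \, \tfrac{1}{2\sigma^2}\|Ax - y\|_2^2 + R(x),
```

so the learned regularizer R(x) is identified with the negative prior log-density, up to an additive constant that does not affect the minimizer.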