Core Concepts
Learned proximal networks (LPNs) are a new class of deep neural networks that exactly implement the proximal operator of a general learned function, enabling the recovery of the underlying data distribution's log-prior in an unsupervised manner. LPNs can be used to solve general inverse problems with convergence guarantees.
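For reference, and independent of this particular paper, the proximal operator of a function R is defined as

```latex
\operatorname{prox}_{R}(y) \;=\; \operatorname*{arg\,min}_{x}\; \tfrac{1}{2}\,\lVert x - y \rVert_2^2 \;+\; R(x).
```

When R(x) = -sigma^2 log p_x(x) for a data prior p_x, this operator is exactly the maximum-a-posteriori denoiser for Gaussian noise of variance sigma^2, which is the sense in which learning a proximal operator amounts to recovering the log-prior of the data distribution.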
Abstract
This work introduces a framework for learning proximal operators of general (potentially non-convex) functions using deep neural networks, termed learned proximal networks (LPNs). The key insights are:
LPNs are parameterized as gradients of convex functions; by a classical characterization of proximal operators, any such gradient is the exact proximal operator of some (possibly non-convex) function, so LPNs implement exact proximal operators by construction. This exactness is what yields convergence guarantees when LPNs are used in iterative optimization schemes like Plug-and-Play ADMM.
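A minimal sketch of this parameterization, assuming PyTorch: the network below is an input-convex neural network (ICNN) whose scalar output psi(y) is convex in y, so its gradient map is an exact proximal operator of some function. The layer sizes, softplus activation, and in-forward weight clamping are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar potential psi(y), convex in y: hidden-to-hidden weights are
    kept non-negative and the activation (softplus) is convex and
    non-decreasing, which together preserve convexity layer to layer."""
    def __init__(self, dim, hidden=128, depth=3):
        super().__init__()
        self.from_input = nn.ModuleList(
            [nn.Linear(dim, hidden) for _ in range(depth)])
        self.from_hidden = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, y):
        z = F.softplus(self.from_input[0](y))
        for lin_y, lin_z in zip(self.from_input[1:], self.from_hidden):
            # clamp hidden-to-hidden weights so the map stays convex in y
            z = F.softplus(lin_y(y) + F.linear(z, lin_z.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias).squeeze(-1)

def lpn(psi, y):
    """The learned proximal network f(y) = grad psi(y) -- by construction
    the gradient of a convex function, hence an exact proximal operator."""
    y = y.detach().requires_grad_(True)
    return torch.autograd.grad(psi(y).sum(), y, create_graph=True)[0]
```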
A new training strategy called proximal matching is proposed, which provably promotes the recovery of the log-prior of the true data distribution from i.i.d. samples, without requiring access to ground-truth proximal operators.
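A hedged sketch of such a loss, reusing the lpn map above (the Gaussian-shaped form and fixed gamma below are simplifications of the paper's formulation): as gamma shrinks, the loss approaches a 0-1 loss on the residual, so minimizing it drives the network toward the posterior mode (the true proximal operator) rather than the posterior mean that a plain L2 loss would yield.

```python
def proximal_matching_loss(psi, x, sigma=0.1, gamma=0.5):
    """Train on i.i.d. clean samples x only: corrupt with Gaussian noise,
    apply the candidate proximal operator, and score the residual with a
    loss that tends to a 0-1 loss as gamma -> 0 (gamma is annealed in
    practice). No ground-truth proximal operator is required."""
    y = x + sigma * torch.randn_like(x)            # synthetic noisy samples
    r = (lpn(psi, y) - x).flatten(1).norm(dim=1)   # per-sample residual norm
    return (1.0 - torch.exp(-r ** 2 / gamma ** 2)).mean()
```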
The ability to recover the underlying regularizer (the negative log-prior) associated with the learned proximal operator provides interpretability: the learned prior can be evaluated and inspected directly, as demonstrated on synthetic and real-world datasets.
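To make this concrete, one standard route (a Moreau-envelope identity, stated here as background rather than as the paper's exact procedure) expresses the recovered regularizer in terms of the learned convex potential psi and its gradient f = grad psi:

```latex
R\bigl(f(y)\bigr)
  \;=\; \bigl\langle y,\, f(y) \bigr\rangle
  \;-\; \tfrac{1}{2}\,\lVert f(y) \rVert_2^2
  \;-\; \psi(y)
  \qquad \text{(up to an additive constant),}
```

which follows from psi(y) = (1/2)||y||^2 - M_R(y), with M_R the Moreau envelope of R; evaluating the right-hand side at points y therefore traces out the learned prior on the range of f.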
Experiments on image deblurring, sparse-view CT reconstruction, and compressed sensing show that LPNs achieve state-of-the-art performance while providing the additional benefit of interpretability.
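To ground how an LPN is deployed in such inverse problems, here is a hedged Plug-and-Play ADMM sketch for a linear model y = Ax + noise; the penalty rho, iteration count, and dense linear solve are illustrative (a practical solver would use CG or FFT tricks for the data step), and prox_step stands in for the lpn map above.

```python
def pnp_admm(A, y, prox_step, rho=1.0, iters=100):
    """Hedged PnP-ADMM sketch for min_x 0.5||Ax - y||^2 + R(x), with the
    learned proximal network standing in for prox_R in the z-update
    (the 1/rho scale is absorbed into the learned prox in this sketch)."""
    n = A.shape[1]
    x = torch.zeros(n); z = torch.zeros(n); u = torch.zeros(n)
    M = A.T @ A + rho * torch.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        x = torch.linalg.solve(M, Aty + rho * (z - u))  # data-fidelity step
        z = prox_step(x + u)                            # LPN as prox of R
        u = u + x - z                                   # dual update
    return z

# usage sketch: prox_step = lambda v: lpn(psi, v.unsqueeze(0)).squeeze(0).detach()
```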
Stats
This summary does not include explicit numerical results or statistics; the key insights above are conceptual and theoretical in nature.