Core Concepts
This work proposes a novel approach to learning monotone neural networks and applies them to solving non-linear inverse problems, leveraging the properties of monotone operators to provide convergence guarantees.
Abstract
The key highlights and insights are as follows:
The authors introduce a novel approach to learning monotone neural networks through a newly defined penalization loss. This approach is particularly effective for solving a class of variational problems, namely monotone inclusion problems, which are commonly encountered in image processing tasks.
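For reference, the property being enforced and the Jacobian-based characterization it rests on can be written as follows; the hinge-type objective at the end is only an illustrative sketch of a penalized loss, where the weight μ and margin ε are hypothetical tuning parameters and the authors' exact loss may differ in its details.

```latex
% Monotonicity of T : R^n -> R^n means  <T(x) - T(y), x - y> >= 0  for all x, y.
% For differentiable T, this is equivalent to the symmetric part of the Jacobian
% being positive semidefinite at every point:
\[
  \forall x:\quad
  \lambda_{\min}\!\left(\tfrac{1}{2}\bigl(J_T(x) + J_T(x)^{\top}\bigr)\right) \;\ge\; 0 .
\]
% Illustrative penalized training objective (hinge on that minimal eigenvalue):
\[
  \mathcal{L}(\theta)
  \;=\; \mathcal{L}_{\mathrm{task}}(\theta)
  \;+\; \mu\,\mathbb{E}_{x}\!\left[
        \max\!\left(0,\;
        \varepsilon - \lambda_{\min}\!\left(\tfrac{1}{2}\bigl(J_{T_\theta}(x) + J_{T_\theta}(x)^{\top}\bigr)\right)
        \right)\right].
\]
```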
The Forward-Backward-Forward (FBF) algorithm is employed to solve these monotone inclusion problems; it remains applicable even when the Lipschitz constant of the neural network is unknown, and it offers convergence guarantees provided the learned operator is monotone.
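For concreteness, below is a minimal sketch of Tseng's FBF iteration for a monotone inclusion 0 ∈ A(x) + B(x), with B single-valued, monotone and Lipschitz, and A maximally monotone with an inexpensive resolvent. The toy affine operator, the non-negativity constraint, and the fixed step size are assumptions made for this example, not the authors' setup; when the Lipschitz constant is unknown, a backtracking line search can choose the step instead.

```python
# Minimal numpy sketch (illustrative only, not the authors' implementation) of
# Tseng's forward-backward-forward iteration for the inclusion 0 in A(x) + B(x).
import numpy as np

def fbf(B, resolvent_A, x0, step, n_iter=1000):
    """Tseng's FBF: forward step, backward (resolvent) step, correcting forward step.
    Converges when B is monotone and L-Lipschitz and step < 1/L; a backtracking
    line search can replace the fixed step when L is unknown."""
    x = x0.copy()
    for _ in range(n_iter):
        Bx = B(x)
        y = resolvent_A(x - step * Bx)      # forward-backward step
        x = y - step * (B(y) - Bx)          # extra forward (correction) step
    return x

# Toy monotone operator: B(x) = M x + b, where the symmetric part of M is PSD.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
M = P @ P.T + 0.5 * (P - P.T)               # PSD symmetric part + skew-symmetric part
b = rng.standard_normal(5)
B = lambda x: M @ x + b

# A = normal cone of the non-negative orthant; its resolvent is the projection.
proj_nonneg = lambda z: np.maximum(z, 0.0)

L = np.linalg.norm(M, 2)                    # Lipschitz constant of B (known in this toy case)
x_hat = fbf(B, proj_nonneg, np.zeros(5), step=0.9 / L)

# Inclusion check: where x_hat_i > 0 we expect B(x_hat)_i close to 0,
# and where x_hat_i = 0 we expect B(x_hat)_i >= 0.
print(x_hat)
print(B(x_hat))
```

Each iteration costs two evaluations of B and one resolvent of A; this extra forward evaluation is what lets FBF dispense with the cocoercivity assumption required by plain forward-backward splitting, asking only for monotonicity and Lipschitz continuity of B.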
Building on plug-and-play methodologies, the authors apply the learned monotone operators to the solution of non-linear inverse problems. The problem is first formulated as a variational inclusion, and a monotone neural network is then trained to approximate the non-monotone operator, as sketched below.
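As a hedged illustration of what such a formulation can look like (the symbols D, z, A, B, g, and C below are introduced only for this sketch and need not match the authors' notation):

```latex
% Hedged illustration (not necessarily the authors' exact formulation): recover
% \bar{x} from a non-linear observation model z \approx D(\bar{x}) by finding
\[
  \hat{x} \quad\text{such that}\quad 0 \;\in\; B(\hat{x}) + A(\hat{x}),
\]
% where B is single-valued, monotone and Lipschitz (the role played by the learned
% network standing in for the operator derived from D), and A is maximally monotone
% (e.g., the subdifferential \partial g of a convex regularizer or the normal cone
% N_C of a constraint set). When B = \nabla f for a smooth convex data-fidelity
% term f and A = \partial g, this reduces to the familiar optimality condition
% 0 \in \nabla f(\hat{x}) + \partial g(\hat{x}).
```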
The authors provide simulation examples where the non-linear inverse problem is successfully solved by leveraging the learned monotone neural network and the FBF algorithm.
The key technical contributions include: (i) a characterization of differentiable monotone operators through the Jacobian of the operator, (ii) a penalized training approach to enforce monotonicity of the neural network, and (iii) an efficient implementation of the penalization computation using power iteration methods.
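The eigenvalue entering the penalty would be prohibitively expensive to obtain by forming the Jacobian explicitly for image-sized inputs, which is where the power iteration comes in. Below is a minimal PyTorch-style sketch of the general technique; the function names, the shifted two-stage iteration, and all numerical choices are assumptions for illustration rather than the authors' implementation.

```python
# Minimal PyTorch-style sketch (an illustration of the technique, not the authors'
# code) of a matrix-free estimate of the smallest eigenvalue of the symmetrized
# Jacobian of an operator T : R^n -> R^n, using a shifted power iteration built
# from Jacobian-vector (jvp) and vector-Jacobian (vjp) products.
import torch
from torch.autograd.functional import jvp, vjp

def sym_jac_matvec(T, x, v, create_graph=False):
    """Return S v with S = (J_T(x) + J_T(x)^T) / 2, without forming the Jacobian.
    Assumes T maps inputs to outputs of the same shape."""
    _, Jv = jvp(T, x, v, create_graph=create_graph)
    _, JTv = vjp(T, x, v, create_graph=create_graph)
    return 0.5 * (Jv + JTv)

def min_eig_sym_jac(T, x, n_iter=20):
    """Shifted power iteration: first estimate the dominant eigenvalue magnitude
    sigma of S, then the dominant eigenvalue of sigma*I - S, which equals
    sigma - lambda_min(S). Only the final Rayleigh quotient keeps the autograd
    graph, so the penalty below can be backpropagated to the parameters of T."""
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(n_iter):                       # stage 1: dominant |eigenvalue| of S
        w = sym_jac_matvec(T, x, v)
        v = w / (w.norm() + 1e-12)
    sigma = torch.dot(v.flatten(), sym_jac_matvec(T, x, v).flatten()).abs()

    u = torch.randn_like(x)
    u = u / u.norm()
    for _ in range(n_iter):                       # stage 2: dominant eigenvalue of sigma*I - S
        w = sigma * u - sym_jac_matvec(T, x, u)
        u = w / (w.norm() + 1e-12)
    Su = sym_jac_matvec(T, x, u, create_graph=True)
    return torch.dot(u.flatten(), Su.flatten())   # Rayleigh quotient, approx. lambda_min(S)

def monotonicity_penalty(T, x, margin=0.0):
    """Hinge penalty: positive as soon as the symmetrized Jacobian has an
    eigenvalue below `margin` (a hypothetical tuning parameter)."""
    return torch.relu(margin - min_eig_sym_jac(T, x))
```

In a training loop, monotonicity_penalty(T, x) would simply be added, with some weight, to the task loss on each batch.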