Core Concepts
The proposed adaptive linearized alternating direction method of multipliers improves the convergence rate of the algorithm by dynamically selecting the coefficient of the proximal regularization term based on the current iterate, without compromising the convergence guarantee.
Abstract
The paper proposes an adaptive linearized alternating direction method of multipliers (ALALM) to solve convex optimization problems with linear constraints. The key innovation is an adaptive technique for dynamically selecting the proximal regularization coefficient, which yields faster convergence than the traditional linearized ADMM.
The main steps of the ALALM algorithm are:
- Initialize the algorithm parameters: the penalty parameter β, the initial regularization coefficient τ₀, and the adaptive parameters.
- Perform the main iterations, which involve:
  - solving the x-subproblem at the current iterate;
  - solving the y-subproblem in a linearized formulation with an adaptive regularization coefficient;
  - updating the Lagrange multiplier.
- Adaptively update the regularization coefficient based on the current iterate, by checking conditions on the y-subproblem solution.
- Repeat until the stopping criterion is met.
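The steps above can be sketched on the paper's LASSO test problem (min ½∥x − b∥₂² + ι∥y∥₁ s.t. x = Ay). This is an illustrative implementation, not the paper's exact scheme: the acceptance test for the adaptive coefficient, the growth factor `eta`, and all function and parameter names are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (component-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alalm_lasso(A, b, iota=0.1, beta=1.0, tau0=None, eta=2.0,
                max_iter=500, tol=1e-8):
    """Sketch of an adaptive linearized ADMM for
        min_{x,y} 0.5*||x - b||^2 + iota*||y||_1   s.t.  x = A y.
    tau0 is the initial regularization coefficient; eta is an assumed
    growth factor for the adaptive update.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2          # ||A||_2^2, classical safe bound
    tau = tau0 if tau0 is not None else 0.1 * beta * L  # start below beta*L
    y = np.zeros(n)
    lam = np.zeros(m)
    for _ in range(max_iter):
        # x-subproblem: strongly convex quadratic, closed-form minimizer
        x = (b - lam + beta * A @ y) / (1.0 + beta)
        while True:
            # linearized y-subproblem with the current coefficient tau
            grad = -A.T @ lam - beta * A.T @ (x - A @ y)
            y_new = soft_threshold(y - grad / tau, iota / tau)
            dy = y_new - y
            # illustrative acceptance test: the proximal term must dominate
            # the quadratic curvature along the step actually taken
            if (tau * dy @ dy >= beta * np.linalg.norm(A @ dy) ** 2
                    or tau >= beta * L):
                break
            tau *= eta                      # increase tau and re-solve
        y = y_new
        lam = lam + beta * (x - A @ y)      # Lagrange multiplier update
        if np.linalg.norm(x - A @ y) < tol and np.linalg.norm(dy) < tol:
            break
    return x, y
```

The point of the adaptivity is visible in the inner loop: rather than fixing τ ≥ β∥A∥² from the start, it begins with a smaller coefficient and enlarges it only when the step-dependent condition fails, which typically permits larger, faster steps.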
The paper provides a rigorous convergence analysis for the proposed ALALM algorithm, proving that the iterates converge to a solution of the original convex optimization problem. Numerical experiments on the LASSO problem demonstrate the improved performance of ALALM over the traditional linearized ADMM.
Statistics
min 1/2 ∥x − b∥₂² + ι ∥y∥₁  s.t.  x = Ay
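The displayed model is the standard splitting reformulation of LASSO: introducing x = Ay separates the smooth quadratic from the ℓ₁ term so each ADMM subproblem is simple. A quick numerical check (data and weight ι are made up for illustration) that the constrained objective matches the original LASSO objective on the constraint set:

```python
import numpy as np

np.random.seed(1)
A = np.random.randn(5, 8)
b = np.random.randn(5)
iota = 0.1                       # regularization weight from the model
y = np.random.randn(8)

# original (unconstrained) LASSO objective in y
lasso = 0.5 * np.linalg.norm(A @ y - b) ** 2 + iota * np.abs(y).sum()

x = A @ y                        # enforce the constraint x = A y
split = 0.5 * np.linalg.norm(x - b) ** 2 + iota * np.abs(y).sum()
assert np.isclose(lasso, split)  # identical whenever x = A y holds
```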