Core Concepts
Efficiently optimize convex functions using adaptive proximal algorithms without backtracking.
Abstract
The article introduces adaPGM, an adaptive proximal gradient method that eliminates the need for backtracking linesearch in convex optimization. Step sizes adapt based on local smoothness estimates, improving efficiency while nonsmooth terms are handled through their proximal operators. The method is extended to the primal-dual setting with adaPDM, which addresses more general problems involving a linear operator. A further variant avoids evaluating the norm of the linear operator by incorporating an efficient backtracking procedure. Numerical simulations demonstrate the effectiveness of these adaptive algorithms compared with traditional linesearch-based methods.
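As a rough illustration of the idea (not the paper's exact update rule), the sketch below runs a proximal gradient iteration whose step size is driven by a local Lipschitz estimate computed from successive gradients, growing cautiously without any linesearch. The function names, the growth rule, and the LASSO usage example are assumptions of this sketch rather than adaPGM as published.

```python
import numpy as np

def prox_l1(z, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_prox_grad(grad_f, prox_g, x0, gamma0=1e-3, max_iter=1000, tol=1e-9):
    """Linesearch-free proximal gradient iteration with adaptive step sizes.

    The step size grows cautiously while staying below 1/(2*L_k), where
    L_k = ||grad_f(x_k) - grad_f(x_{k-1})|| / ||x_k - x_{k-1}|| is a local
    smoothness estimate. This is a simplified stand-in for adaPGM's rule.
    """
    x_prev, g_prev = x0, grad_f(x0)
    gamma_prev = gamma = gamma0
    x = prox_g(x_prev - gamma * g_prev, gamma)               # first proximal gradient step
    for _ in range(max_iter):
        g = grad_f(x)
        dx, dg = x - x_prev, g - g_prev
        if np.linalg.norm(dx) <= tol:                        # iterates stopped moving
            break
        L_loc = np.linalg.norm(dg) / np.linalg.norm(dx)      # local Lipschitz estimate
        growth = gamma * np.sqrt(1.0 + gamma / gamma_prev)   # cautious growth of the step
        gamma_prev, gamma = gamma, min(growth, 0.5 / max(L_loc, 1e-16))
        x_prev, g_prev = x, g
        x = prox_g(x_prev - gamma * g_prev, gamma)           # proximal gradient step
    return x

# Usage: LASSO, min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
x_hat = adaptive_prox_grad(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda z, t: prox_l1(z, lam * t),
    x0=np.zeros(100),
)
```

The growth factor mirrors known linesearch-free adaptive rules: the step is allowed to increase only as fast as local curvature permits, so no function values or backtracks are needed.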
Stats
AdaPGM avoids backtracking linesearch in convex optimization.
AdaPGM adapts step sizes based on local smoothness estimates.
AdaPDM extends the method to more general primal-dual problems involving a linear operator (see the sketch after this list).
A further variant efficiently avoids evaluating the linear operator norm.
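To make the primal-dual problem class concrete, here is a minimal fixed-step Condat-Vu-style iteration for min_x f(x) + g(x) + h(Ax), applied to 1-D total-variation denoising. It is not adaPDM itself: the step sizes below are fixed and require the spectral norm of A, which is precisely what the adaptive variants replace or avoid. All names and the example problem are assumptions of this sketch.

```python
import numpy as np

def condat_vu(grad_f, L_f, prox_g, prox_h_conj, A, x0, n_iter=3000):
    """Fixed-step primal-dual iteration for min_x f(x) + g(x) + h(A x).

    Step sizes satisfy tau * (L_f/2 + sigma * ||A||^2) < 1, which requires
    the spectral norm of A; adaPDM adapts the steps online instead, and a
    further variant avoids the operator norm altogether.
    """
    op_norm = np.linalg.norm(A, 2)                          # spectral norm of dense A
    sigma = 1.0 / op_norm
    tau = 0.99 / (L_f / 2.0 + sigma * op_norm ** 2)
    x, y = x0.copy(), np.zeros(A.shape[0])
    for _ in range(n_iter):
        x_new = prox_g(x - tau * (grad_f(x) + A.T @ y), tau)        # primal update
        y = prox_h_conj(y + sigma * (A @ (2 * x_new - x)), sigma)   # dual update
        x = x_new
    return x

# Usage: 1-D total-variation denoising, min_x 0.5*||x - d||^2 + lam*||D x||_1
n, lam = 200, 2.0
rng = np.random.default_rng(0)
d = np.repeat([0.0, 3.0, -1.0, 2.0], n // 4) + 0.3 * rng.standard_normal(n)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]                    # forward-difference operator
x_tv = condat_vu(
    grad_f=lambda x: x - d, L_f=1.0,                        # f(x) = 0.5*||x - d||^2
    prox_g=lambda z, t: z,                                  # g = 0
    prox_h_conj=lambda z, s: np.clip(z, -lam, lam),         # h = lam*||.||_1, so h* is an inf-ball indicator
    A=D, x0=d.copy(),
)
```

Replacing the fixed tau and sigma with online estimates, in the spirit of the adaptive rules above, is the step this sketch deliberately leaves to the paper's algorithms.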
Quotes
"AdaPGM adapts step sizes based on local smoothness estimates."
"Numerical simulations demonstrate the effectiveness of the proposed algorithms."