The paper addresses the open question of whether NAG and FISTA exhibit linear convergence for strongly convex functions without requiring any prior knowledge of the modulus of strong convexity. The key contributions are:
The paper establishes the linear convergence of NAG for strongly convex functions by constructing a novel Lyapunov function whose kinetic-energy term carries a dynamically adapting coefficient. This linear convergence is shown to hold independently of the parameter r.
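For reference, the iteration this analysis concerns is the standard NAG scheme with a momentum parameter r. The minimal Python sketch below uses the textbook momentum weight k/(k+r) and a step size built only from the smoothness constant L; the function name `nag` and these specific choices are illustrative assumptions for this summary, not the paper's exact formulation. The key point is that the modulus of strong convexity never enters the update.

```python
import numpy as np

def nag(grad_f, x0, L, r=3.0, num_iters=500):
    """Sketch of Nesterov accelerated gradient with momentum weight k / (k + r)."""
    s = 1.0 / L                                   # step size from smoothness only
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    for k in range(1, num_iters + 1):
        x = y - s * grad_f(y)                     # gradient step at the extrapolated point
        y = x + (k / (k + r)) * (x - x_prev)      # momentum / extrapolation step
        x_prev = x
    return x_prev
```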
The paper refines a key inequality associated with strong convexity so that it covers the proximal setting, bridging the theoretical gap between smooth and composite optimization. Using the implicit-velocity phase-space representation, the Lyapunov function guarantees linear convergence of both the function values and the squared norm of the proximal subgradient for FISTA.
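The composite setting referred to here is the forward-backward (proximal gradient) iteration underlying FISTA. The sketch below uses the textbook FISTA momentum sequence and soft-thresholding as an example proximal operator (i.e. g = lam * ||x||_1); these are standard illustrative choices, not the paper's specific parameterization.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (example choice of the nonsmooth part g)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(grad_f, x0, L, lam=0.1, num_iters=500):
    """Sketch of FISTA for min_x f(x) + lam * ||x||_1 with smooth f."""
    s = 1.0 / L
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    t = 1.0
    for _ in range(num_iters):
        x = soft_threshold(y - s * grad_f(y), s * lam)      # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)         # momentum step
        x_prev, t = x, t_next
    return x_prev
```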
The paper provides an intuitive analysis on a quadratic function, demonstrating that both the function value and the squared gradient norm converge linearly under NAG, with the linear convergence rate being only marginally affected by the choice of the parameter r.
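As a rough numerical illustration of this kind of experiment, the snippet below runs the NAG-style iteration sketched earlier on a randomly generated strongly convex quadratic f(x) = 0.5 x^T A x and prints f(x_k) and ||grad f(x_k)||^2 along the way. The matrix, dimensions, and iteration counts are arbitrary assumptions meant only to show geometric decay of both quantities, not to reproduce the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q + 0.1 * np.eye(20)              # strongly convex quadratic
L = np.linalg.eigvalsh(A).max()             # smoothness constant
grad_f = lambda x: A @ x
f = lambda x: 0.5 * x @ A @ x

s, r = 1.0 / L, 3.0
x_prev = rng.standard_normal(20)
y = x_prev.copy()
for k in range(1, 301):
    x = y - s * grad_f(y)                   # gradient step
    y = x + (k / (k + r)) * (x - x_prev)    # momentum step
    x_prev = x
    if k % 50 == 0:
        g2 = np.linalg.norm(grad_f(x)) ** 2
        print(f"k={k:4d}  f={f(x):.3e}  |grad|^2={g2:.3e}")
```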
Key insights distilled from the paper by Bowen Li, Bin... (arxiv.org, 04-10-2024): https://arxiv.org/pdf/2306.09694.pdf