The paper introduces an autonomous first-order system with closed-loop damping, denoted (LD), for convex optimization. While optimal convergence rates are typically achieved by non-autonomous methods with open-loop damping (damping prescribed in advance as a function of time), the authors show that their closed-loop system (LD) attains a rate arbitrarily close to the optimal one.
The key aspects are:
The authors design the damping coefficient γ of the Inertial Damped Gradient system (IDGγ) through a Lyapunov function E, the sum of the suboptimality gap f(x) − min f and the squared norm of the velocity. Because γ depends only on the state (through E) and not explicitly on time, (LD) is an autonomous system.
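For reference, the generic inertial damped gradient template and a Lyapunov energy of this type can be written as follows (the precise closed-loop law defining γ in (LD) is given in the paper; the display below is the standard form of such systems, with the velocity term weighted by 1/2 as is customary):

```latex
\ddot{x}(t) + \gamma(t)\,\dot{x}(t) + \nabla f(x(t)) = 0,
\qquad
E(t) = f(x(t)) - \min f + \tfrac{1}{2}\,\lVert \dot{x}(t) \rVert^2 .
```

In open-loop methods γ is a prescribed function of t, which makes the system non-autonomous; in (LD), γ is a function of the state (x, ẋ) through E, so the dynamics no longer depend explicitly on time.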
The authors prove that the Lyapunov function E is non-increasing along trajectories of (LD) and converges to 0 as t → ∞. Since E upper-bounds the suboptimality gap, this implies that the function values converge to the optimal value.
By analyzing the rate of decrease of E, the authors prove that the function values converge at a rate arbitrarily close to the optimal rate o(1/t²), which is achieved by the non-autonomous Asymptotically Vanishing Damping (AVDα) system.
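For comparison, the (AVDα) dynamics use the open-loop, time-dependent damping α/t:

```latex
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \nabla f(x(t)) = 0,
```

for which the rate f(x(t)) − min f = o(1/t²) is known to hold for convex f when α > 3. The explicit dependence on t is what makes this system non-autonomous, in contrast to (LD).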
The authors also derive a practical algorithm, LYDIA, by discretizing the (LD) system, and provide theoretical guarantees for its convergence.
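A minimal, illustrative discretization of such closed-loop dynamics might look like the sketch below. This is not the paper's exact LYDIA scheme: the step size, the semi-implicit Euler updates, and in particular the damping law `gamma = sqrt(E)` are assumptions chosen for illustration; only the overall pattern (compute a discrete energy, derive the damping from it, then update velocity and position) reflects the closed-loop idea.

```python
import numpy as np

def closed_loop_descent(grad, f, x0, f_min=0.0, h=0.01, steps=20000):
    """Illustrative semi-implicit Euler discretization of an inertial
    system x'' + gamma * x' + grad_f(x) = 0, where the damping gamma
    is recomputed at every step from a discrete Lyapunov energy
    E = f(x) - f_min + 0.5 * ||v||^2 (closed-loop damping sketch)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        # Discrete Lyapunov energy at the current state.
        E = f(x) - f_min + 0.5 * np.dot(v, v)
        # Assumed closed-loop law (hypothetical, for illustration only).
        gamma = np.sqrt(max(E, 0.0))
        # Semi-implicit Euler: update velocity, then position with new v.
        v = v + h * (-gamma * v - grad(x))
        x = x + h * v
    return x

# Toy quadratic problem: f(x) = 0.5 * x^T A x, minimum value 0 at x = 0.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = closed_loop_descent(grad, f, x0=[3.0, -2.0])
```

As the iterates approach the minimizer, the energy E shrinks and so does the damping, mirroring how the continuous-time damping in (LD) vanishes along the trajectory.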
Numerical experiments support the theoretical findings and illustrate the advantages of the closed-loop damping approach over open-loop damping.
Key insights distilled from a paper by Severin Maie... at arxiv.org, 04-16-2024.
https://arxiv.org/pdf/2311.10053.pdf