The author introduces Polygonal Unadjusted Langevin Algorithms as a solution to the exploding and vanishing gradient problems in deep learning, providing stability and efficiency.
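As a rough illustration of the Langevin-type update this line refers to (not the paper's exact scheme), a tamed, polygonal-Euler-style unadjusted Langevin step rescales the gradient so that a single update cannot blow up; the specific taming function and hyperparameters below are assumptions made purely for illustration.

```python
import numpy as np

def tamed_ula_step(theta, grad_fn, step=1e-3, beta=1e8, rng=np.random.default_rng()):
    """One tamed (polygonal-Euler-style) unadjusted Langevin step -- illustrative sketch only."""
    g = grad_fn(theta)
    tamed = g / (1.0 + step * np.linalg.norm(g))            # taming keeps the drift bounded
    noise = np.sqrt(2.0 * step / beta) * rng.standard_normal(theta.shape)
    return theta - step * tamed + noise                     # gradient drift plus Gaussian exploration
```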
The author introduces a new storage-optimal first-order method for solving semidefinite programs with low-rank solutions, strict complementarity, and known subspace restrictions.
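For context, the standard SDP template such methods target is shown below; the storage savings come from the fact that a rank-$r$ solution $X^\star = UU^\top$ can be represented by $U \in \mathbb{R}^{n \times r}$ with $r \ll n$ (a generic formulation, not the paper's exact notation).

\[
\min_{X \succeq 0}\ \langle C, X\rangle \quad \text{s.t.}\quad \langle A_i, X\rangle = b_i,\ \ i = 1,\dots,m .
\]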
The authors propose novel algorithms for solving nonconvex minimax problems with coupled linear constraints, providing iteration complexity guarantees.
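The problem class typically takes a form like the following, where a linear constraint couples the minimization and maximization variables (a generic template, not necessarily the paper's exact setting):

\[
\min_{x \in \mathcal{X}} \ \max_{y \in \mathcal{Y}} \ f(x, y) \quad \text{s.t.}\quad A x + B y \le c ,
\]

with $f$ possibly nonconvex in $x$.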
The authors analyze the convergence of Projected Gradient Descent to Bouligand stationary points, establishing guarantees under specific conditions.
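For reference, one projected gradient iteration simply follows a gradient step with a projection back onto the feasible set; the toy problem below (projection onto the nonnegative orthant) is an assumption for illustration, not the paper's setting.

```python
import numpy as np

def pgd_step(x, grad_fn, project, step=0.1):
    # One projected gradient iteration: gradient step, then project onto the feasible set.
    return project(x - step * grad_fn(x))

# Toy example: minimize ||x - b||^2 over the nonnegative orthant.
b = np.array([1.0, -2.0, 3.0])
x = np.zeros_like(b)
for _ in range(100):
    x = pgd_step(x, grad_fn=lambda z: 2.0 * (z - b), project=lambda z: np.maximum(z, 0.0))
```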
FOSI is a novel meta-algorithm that enhances first-order optimizers by incorporating second-order information efficiently.
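A rough sketch of the general idea, assuming a quadratic objective with an explicitly available Hessian and plain gradient descent as the base optimizer (FOSI itself estimates curvature iteratively and works with any base optimizer; this is not its implementation):

```python
import numpy as np

def split_step(x, grad, hess, k=2, base_lr=0.1):
    # Illustrative only: Newton-like step along the top-k curvature directions,
    # base-optimizer (plain GD) step on the orthogonal complement.
    eigvals, eigvecs = np.linalg.eigh(hess)   # stand-in for an iterative eigen-estimate
    V, lam = eigvecs[:, -k:], eigvals[-k:]    # top-k eigenpairs (assumed positive here)
    coeffs = V.T @ grad
    newton_part = V @ (coeffs / lam)          # curvature-scaled step inside the subspace
    rest = grad - V @ coeffs                  # gradient component outside the subspace
    return x - newton_part - base_lr * rest
```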
The author introduces a non-convex relaxation approach for the chance-constrained binary knapsack problem, providing upper bounds as tight as other continuous relaxations. A polynomial-time algorithm is proposed to solve this relaxation efficiently.
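The underlying problem is the chance-constrained knapsack with random item weights $\tilde w_j$; a generic statement of the model (not necessarily the paper's exact notation) is

\[
\max_{x \in \{0,1\}^n}\ \sum_{j=1}^{n} p_j x_j \quad \text{s.t.}\quad \mathbb{P}\Big(\sum_{j=1}^{n} \tilde w_j x_j \le c\Big) \ge 1 - \epsilon .
\]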
The author introduces the Sharpened Lazy Incremental Quasi-Newton Method (SLIQN) to address shortcomings in existing incremental methods, achieving explicit superlinear convergence and superior empirical performance at a per-iteration O(d²) cost.
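For reference, incremental quasi-Newton schemes of this kind build on BFGS-style updates of per-component Hessian approximations; the classical update (not necessarily SLIQN's exact "sharpened" variant) is

\[
B_{+} = B - \frac{B s s^{\top} B}{s^{\top} B s} + \frac{y y^{\top}}{y^{\top} s}, \qquad s = x_{+} - x, \quad y = \nabla f(x_{+}) - \nabla f(x),
\]

and because the change is low-rank, the corresponding inverse (and hence the step) can be maintained in O(d²) time via the Sherman-Morrison formula, consistent with the quoted per-iteration cost.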
The authors improve the Adam optimizer using implicit-explicit (IMEX) time-integration methods, enhancing neural network training.
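For context, the standard (explicit) Adam update is shown below; the IMEX viewpoint treats Adam as a time discretization of an underlying ODE and replaces part of this explicit scheme with an implicit one (the exact splitting is the paper's contribution and is not reproduced here):

\[
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t,\qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t \odot g_t,
\]
\[
\hat m_t = \frac{m_t}{1-\beta_1^t},\qquad
\hat v_t = \frac{v_t}{1-\beta_2^t},\qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat m_t}{\sqrt{\hat v_t} + \varepsilon}.
\]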
The authors formally verify the convergence rates of optimization algorithms in the Lean 4 theorem prover, yielding machine-checked proofs of these results.
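As an example of the kind of statement such formalizations target (a textbook rate, not necessarily one of the paper's theorems): for convex, $L$-smooth $f$, gradient descent with step size $1/L$ satisfies

\[
f(x_k) - f(x^\star) \ \le\ \frac{L\,\|x_0 - x^\star\|^2}{2k}.
\]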
The authors propose a novel algorithm combining reinforcement learning and evolutionary strategies to efficiently solve the latency location-routing problem.
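Roughly, the latency location-routing problem jointly selects depot locations and vehicle routes so as to minimize total customer latency, i.e. the sum of arrival times (a generic statement of the objective, not the paper's exact model):

\[
\min\ \sum_{i \in N} t_i ,
\]

where $t_i$ is the time at which customer $i$ is first reached, subject to depot-selection and routing feasibility constraints.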