This research paper introduces novel stochastic splitting algorithms designed to efficiently solve large-scale composite monotone inclusion problems, guaranteeing almost-sure convergence of iterates to a solution without requiring prior knowledge of linear operator norms or imposing restrictive regularity assumptions.
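As a hedged illustration of the problem class (not the paper's specific algorithms), the sketch below runs a generic stochastic forward-backward iteration for a composite monotone inclusion 0 ∈ A(x) + B(x), where A is the subdifferential of an ℓ1 term (resolvent = soft-thresholding) and B is accessed through random samples; the operators, step-size schedule, and data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's method): stochastic
# forward-backward splitting for the inclusion 0 in A(x) + B(x), with
# A = subdifferential of lam*||x||_1 and B(x) = E_i[Q_i x - b_i]
# evaluated through a randomly sampled component at each step.

rng = np.random.default_rng(0)
d, n, lam = 20, 100, 0.1
Q = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(n)]
Q = [q @ q.T for q in Q]                    # each component PSD, hence monotone
b = [rng.standard_normal(d) for _ in range(n)]

def soft_threshold(v, t):
    """Resolvent (prox) of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
for k in range(1, 5001):
    i = rng.integers(n)                     # sample one component of B
    Bx = Q[i] @ x - b[i]                    # stochastic forward evaluation
    gamma = 1.0 / (10.0 + k)                # diminishing step size (assumed)
    x = soft_threshold(x - gamma * Bx, gamma * lam)  # backward (resolvent) step

print("approximate solution norm:", np.linalg.norm(x))
```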
The λ-SAGA algorithm, a generalization of SAGA, achieves almost-sure convergence and provably reduces asymptotic variance compared to standard SGD, without assuming strong convexity or Lipschitz-gradient conditions.
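For context, here is a minimal sketch of the classical SAGA gradient estimator that λ-SAGA generalizes; the paper's specific λ parameterization is not reproduced. The least-squares problem, step size, and iteration budget are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the classical SAGA estimator on a least-squares problem.
# lambda-SAGA generalizes this update; the lambda parameterization from the
# paper is not reproduced here.

rng = np.random.default_rng(1)
n, d = 200, 10
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th component f_i(w) = 0.5 * (a_i^T w - y_i)^2."""
    return A[i] * (A[i] @ w - y[i])

w = np.zeros(d)
table = np.array([grad_i(w, i) for i in range(n)])  # stored past gradients
table_avg = table.mean(axis=0)
step = 0.01                                          # assumed step size

for _ in range(20000):
    i = rng.integers(n)
    g_new = grad_i(w, i)
    # SAGA estimator: unbiased, with variance that shrinks as iterates settle
    g_est = g_new - table[i] + table_avg
    w -= step * g_est
    # refresh the stored gradient and its running average
    table_avg += (g_new - table[i]) / n
    table[i] = g_new

print("residual:", np.linalg.norm(A @ w - y) / np.sqrt(n))
```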
Tuning-free algorithms can match the performance of optimally tuned optimization algorithms given only loose hints about the problem parameters.
A new class of Langevin-based algorithms overcomes gradient-related challenges in deep learning.
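As a hedged illustration of the Langevin-based family (not necessarily the new class proposed in the paper), the sketch below performs stochastic gradient Langevin dynamics (SGLD): each minibatch gradient step is perturbed by Gaussian noise scaled by the step size. The logistic-regression problem and all hyperparameters are assumptions.

```python
import numpy as np

# Minimal SGLD sketch: a generic Langevin-based scheme (not necessarily the
# paper's new class). Each update adds Gaussian noise with variance 2*eps*T
# to a minibatch gradient step on a logistic-regression loss.

rng = np.random.default_rng(2)
n, d = 500, 5
X = rng.standard_normal((n, d))
y = (X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n) > 0).astype(float)

def minibatch_grad(w, idx):
    """Minibatch gradient of the logistic loss, rescaled to the full dataset."""
    z = X[idx] @ w
    p = 1.0 / (1.0 + np.exp(-z))
    return (n / len(idx)) * X[idx].T @ (p - y[idx])

w = np.zeros(d)
batch, temperature = 32, 1.0
for t in range(1, 3001):
    idx = rng.choice(n, size=batch, replace=False)
    eps = 0.5 / (100.0 + t)                        # decaying step size (assumed)
    noise = rng.standard_normal(d) * np.sqrt(2.0 * eps * temperature)
    w = w - eps * minibatch_grad(w, idx) + noise   # Langevin step: drift + noise

print("trained weights:", np.round(w, 3))
```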
This paper explores the relationship between optimization and generalization in deep learning by accounting for the stochastic nature of optimizers. By analyzing populations of trained models rather than single models, it gives a clearer picture of how different optimization algorithms actually perform.