This article proposes a novel gradient-based online optimization framework for solving stochastic programming problems that frequently arise in the context of cyber-physical and robotic systems. The framework encompasses both gradient descent and quasi-Newton methods, and provides convergence guarantees even in the presence of modeling errors.
This survey provides a comprehensive introduction to the rapidly developing field of risk measures and their applications in diverse areas, including engineering design, data-driven problems, and decision making under uncertainty. It highlights the central role of superquantiles (conditional value-at-risk) in unifying various threads and connecting concepts of risk, regret, deviation, and error.
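To make the superquantile concrete: the empirical superquantile (CVaR) of a loss sample can be computed from the Rockafellar–Uryasev formula CVaR_alpha(L) = min_c { c + E[(L - c)_+] / (1 - alpha) }, whose minimum is attained at one of the sample points. A minimal sketch (the function name and sample values are illustrative, not from the survey):

```python
import numpy as np

def superquantile(losses, alpha):
    """Empirical superquantile (CVaR) at level alpha via the
    Rockafellar-Uryasev formula: min_c c + E[(L - c)_+] / (1 - alpha).
    The objective is piecewise linear with kinks at the sample points,
    so scanning the samples as candidates for c finds the minimum."""
    losses = np.asarray(losses, dtype=float)
    vals = [c + np.maximum(losses - c, 0.0).mean() / (1.0 - alpha)
            for c in losses]
    return min(vals)

# At alpha = 0.8 the superquantile averages the worst 20% of losses,
# which here is the single loss 10.
print(superquantile([1.0, 2.0, 3.0, 4.0, 10.0], 0.8))  # ~ 10.0
```

The same routine recovers the familiar "mean of the worst tail" reading: at alpha = 0.5 on four equally likely losses it averages the worst two.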
This work proposes a unified framework for analyzing a broad class of Markov chains, called Itô chains, which can model various sampling, optimization, and boosting algorithms. The authors bound the discretization error between the Itô chain and the corresponding Itô diffusion in the Wasserstein-2 (W2) distance, under weak and general assumptions on the chain's terms that allow non-Gaussian and state-dependent noise.
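The Itô chain idea can be illustrated with a toy recursion X_{k+1} = X_k - eta*b(X_k) + sqrt(eta)*sigma(X_k)*xi_k whose law tracks the diffusion dX = -b(X) dt + sigma(X) dW. The sketch below (a simple Ornstein-Uhlenbeck example, not taken from the paper) uses deliberately non-Gaussian Rademacher noise, echoing the paper's point that the chain still approximates the Gaussian-driven diffusion:

```python
import numpy as np

# Ito chain X_{k+1} = X_k - eta*X_k + sqrt(eta)*xi_k with Rademacher
# noise xi_k in {-1, +1}, approximating the Ornstein-Uhlenbeck
# diffusion dX = -X dt + dW, whose law at large T is ~ N(0, 1/2).
rng = np.random.default_rng(0)
eta, T, n_particles = 0.01, 5.0, 10_000
x = np.full(n_particles, 2.0)          # all particles start at X_0 = 2
for _ in range(int(T / eta)):
    xi = rng.choice([-1.0, 1.0], size=n_particles)  # non-Gaussian noise
    x = x - eta * x + np.sqrt(eta) * xi
print(x.mean(), x.var())  # empirically close to 0 and 1/2
```

Despite the discrete, non-Gaussian increments, the empirical mean and variance of the chain match the diffusion's stationary N(0, 1/2) up to O(eta) discretization error.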
This paper presents a unified approach to the theoretical analysis of first-order gradient methods for stochastic optimization and variational inequalities under Markovian noise. The proposed methods achieve optimal (linear) dependence on the mixing time of the underlying noise sequence and remove restrictive assumptions made in prior work.
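The Markovian-noise setting can be pictured as SGD whose sample indices are produced by a Markov chain rather than drawn i.i.d. The sketch below (a generic illustration, not the paper's method) uses a lazy random walk on a cycle over the components and returns the ergodic average of the iterates:

```python
import numpy as np

def markov_sgd(grad_i, n, x0, step=0.05, steps=2000, rng=None):
    """SGD whose sample indices follow a Markov chain (a lazy random
    walk on a cycle over the n components) rather than i.i.d. draws.
    Returns the running average of the iterates, which converges under
    mixing assumptions on the index chain."""
    rng = rng or np.random.default_rng(0)
    x, avg, i = x0, x0, 0
    for t in range(steps):
        i = (i + rng.choice([-1, 0, 1])) % n  # one Markov-chain transition
        x = x - step * grad_i(x, i)
        avg += (x - avg) / (t + 1)            # Polyak-style averaging
    return avg

# Mean estimation: f_i(x) = (x - b_i)^2 / 2, minimized at mean(b) = 1.5.
b = np.array([0.0, 1.0, 2.0, 3.0])
x_hat = markov_sgd(lambda x, i: x - b[i], n=4, x0=0.0)
```

Because consecutive indices are correlated, the effective noise depends on how fast the walk mixes, which is exactly why the mixing time enters the rates discussed above.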
This paper provides a non-asymptotic, instance-dependent analysis of a variance-reduced proximal gradient (VRPG) algorithm for stochastic convex optimization under convex constraints. The algorithm's performance is shown to be governed by the scaled distance between the solution of the given problem and that of a small perturbation of it, both solved under the same convex constraints.
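For flavor, here is a generic SVRG-style variance-reduced proximal gradient sketch (not the paper's exact VRPG scheme or its instance-dependent analysis): an anchor gradient is recomputed each epoch, and projection onto the constraint set plays the role of the proximal step.

```python
import numpy as np

def vr_prox_gradient(grad_i, n, x0, project, step=0.1, epochs=100, rng=None):
    """SVRG-style variance-reduced proximal gradient sketch."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(epochs):
        x_ref = x.copy()                       # anchor point for this epoch
        full_grad = np.mean([grad_i(x_ref, i) for i in range(n)], axis=0)
        for _ in range(n):
            i = rng.integers(n)
            # Unbiased gradient estimate whose variance vanishes as
            # x and x_ref both approach the solution.
            g = grad_i(x, i) - grad_i(x_ref, i) + full_grad
            x = project(x - step * g)          # projection = prox step
    return x

# Constrained least squares: min sum_i (a_i . x - b_i)^2 over the box [0,1]^2.
A = np.eye(2)
b = np.array([0.5, 0.3])
sol = vr_prox_gradient(
    lambda x, i: 2.0 * A[i] * (A[i] @ x - b[i]),
    n=2, x0=np.zeros(2), project=lambda x: np.clip(x, 0.0, 1.0),
)
```

On this toy problem the unconstrained minimizer lies inside the box, so the projected iterates converge to it.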
The authors propose a method called U-DoG that achieves near-optimal rates for smooth stochastic convex optimization without requiring prior knowledge of problem parameters such as smoothness, noise magnitude, or initial distance to optimality.
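U-DoG's exact update is not reproduced here; as a sketch of the parameter-free family it builds on, the basic DoG (Distance over Gradients) rule sets the step size to the maximum distance traveled from the starting point divided by the root of the accumulated squared gradient norms, so no step size needs tuning:

```python
import numpy as np

def dog(grad, x0, steps=1000, r_eps=1e-4):
    """Basic DoG (Distance over Gradients) step-size sketch:
    eta_t = max_{s<=t} ||x_s - x_0|| / sqrt(sum_{s<=t} ||g_s||^2)."""
    x = x0.copy()
    max_dist = r_eps        # tiny seed so the first step is nonzero
    grad_sq_sum = 0.0
    for _ in range(steps):
        g = grad(x)
        grad_sq_sum += float(g @ g)
        x = x - (max_dist / np.sqrt(grad_sq_sum)) * g
        max_dist = max(max_dist, float(np.linalg.norm(x - x0)))
    return x

# Tuning-free minimization of f(x) = ||x - c||^2 / 2.
c = np.array([1.0, -1.0])
x_star = dog(lambda x: x - c, x0=np.zeros(2))
```

The step size ramps up geometrically from the small seed r_eps and then self-stabilizes, which is how the method avoids needing the initial distance to optimality or the smoothness constant up front.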
The author presents Push-LSVRG-UP as a distributed stochastic optimization algorithm for large-scale convex finite-sum optimization problems over unbalanced directed networks, emphasizing accelerated linear convergence and reduced computational complexity.
ALEXR is an efficient single-loop primal-dual block-coordinate proximal algorithm for convex finite-sum coupled compositional stochastic optimization, achieving optimal convergence rates and broadening the class of problems that can be solved efficiently.