Variance-Reduced Proximal Gradient Algorithm for Stochastic Optimization with Constraints: A Non-Asymptotic Instance-Dependent Analysis
This paper provides a non-asymptotic, instance-dependent analysis of a variance-reduced proximal gradient (VRPG) algorithm for stochastic convex optimization under convex constraints. The algorithm's performance is shown to be governed by a scaled distance between the solution of the given problem and that of a small perturbation of it, both solved under the same convex constraints.
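To fix ideas, the following is a minimal sketch of an SVRG-style variance-reduced proximal gradient loop, not the paper's exact algorithm: the objective (constrained least squares), the constraint set (a Euclidean ball, whose prox is a projection), and all names such as `vrpg_sketch` and `prox_ball` are illustrative assumptions.

```python
import numpy as np

def prox_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x|| <= radius}, i.e. the
    prox operator of the ball's indicator function (illustrative)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def vrpg_sketch(A, b, step=0.05, epochs=20, radius=1.0, seed=0):
    """SVRG-style variance-reduced proximal gradient for the
    illustrative constrained problem
        min_{||x|| <= radius}  (1/2n) ||A x - b||^2,
    where the per-sample gradient is a_i (a_i^T x - b_i)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n      # full gradient at anchor
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])          # stochastic gradient at x
            gi_ref = A[i] * (A[i] @ x_ref - b[i])  # same sample at anchor
            v = gi - gi_ref + full_grad            # variance-reduced direction
            x = prox_ball(x - step * v, radius)    # proximal (projection) step
    return x
```

The variance-reduced direction `v` is an unbiased gradient estimate whose variance shrinks as the iterate approaches the anchor, while the prox step keeps every iterate feasible for the convex constraint.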