Accelerated Stochastic Optimization with No Prior Knowledge of Problem Parameters
The authors propose U-DoG, a method that attains near-optimal convergence rates for smooth stochastic convex optimization without prior knowledge of problem parameters such as the smoothness constant, the noise magnitude, or the initial distance to the optimum.