Basic concepts
Investigating theoretical guarantees of Optimistic Online Mirror Descent for the SEA model with smooth expected loss functions.
Abstract
The study explores the Stochastically Extended Adversarial (SEA) model, which bridges stochastic and adversarial online convex optimization. It applies optimistic Online Mirror Descent (OMD) to the SEA model with smooth expected loss functions and establishes regret bounds for convex, strongly convex, and exp-concave functions. The results match or improve the bounds of Sachs et al. (2022) while requiring weaker assumptions. The analysis relies on assumptions of bounded gradient norms, a bounded domain, bounded maximal variance, smoothness of the expected functions, convexity, and strong convexity. The study also presents results for exp-concave functions, a case not previously explored in this setting.
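To make the method concrete, the two-step update of optimistic OMD can be sketched in the special case of a Euclidean regularizer, where it reduces to optimistic (projected) gradient descent. This is a minimal illustration, not the paper's exact algorithm: the helper names (`project_ball`, `hint_fn`, `grad_fn`), the fixed step size, and the ball-shaped domain are all assumptions made for the sketch.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto an L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_omd(grad_fn, hint_fn, T, dim, eta=0.1, radius=1.0):
    """Optimistic OMD with the Euclidean regularizer (two-step update).

    x_t     : the decision played at round t, built from the hint M_t
    x_hat   : the auxiliary iterate, updated with the observed gradient g_t
    """
    x_hat = np.zeros(dim)                               # auxiliary iterate x̂_0
    played = []
    for t in range(T):
        M_t = hint_fn(t, x_hat)                         # optimism: predicted gradient
        x_t = project_ball(x_hat - eta * M_t, radius)   # play x_t
        played.append(x_t)
        g_t = grad_fn(t, x_t)                           # observe gradient at x_t
        x_hat = project_ball(x_hat - eta * g_t, radius) # update auxiliary iterate x̂_t
    return played
```

A common choice of hint in the SEA literature is the previous round's gradient; in benign (nearly stochastic) environments the hint is accurate and the regret shrinks accordingly.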
Statistics
Under the smoothness condition on expected loss functions, it is shown that the expected static regret of optimistic Follow-The-Regularized-Leader (FTRL) depends on the cumulative stochastic variance σ²_{1:T} and the cumulative adversarial variation Σ²_{1:T}.
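For reference, the cumulative quantities can be written as follows (notation as commonly used in the SEA literature following Sachs et al. (2022); the precise definitions should be checked against the paper):

```latex
\sigma_{1:T}^2 = \mathbb{E}\!\left[\sum_{t=1}^{T} \sigma_t^2\right],
\qquad
\sigma_t^2 = \sup_{x \in \mathcal{X}} \mathbb{E}_{f_t \sim \mathcal{D}_t}
  \left[\lVert \nabla f_t(x) - \nabla F_t(x) \rVert^2\right],
```
```latex
\Sigma_{1:T}^2 = \mathbb{E}\!\left[\sum_{t=1}^{T}
  \sup_{x \in \mathcal{X}} \lVert \nabla F_t(x) - \nabla F_{t-1}(x) \rVert^2\right],
\qquad
F_t(x) = \mathbb{E}_{f_t \sim \mathcal{D}_t}[f_t(x)].
```

Here σ²_{1:T} measures the stochastic noise within each round's distribution, while Σ²_{1:T} measures how adversarially the distributions drift between rounds; both vanish in a purely stochastic, stationary environment.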
For strongly convex and smooth functions, an O((1/λ)(σ²_max + Σ²_max) log((σ²_{1:T} + Σ²_{1:T}) / (σ²_max + Σ²_max))) bound is established.
For exp-concave and smooth functions, a new O(d log(σ²_{1:T} + Σ²_{1:T})) bound is derived.
Quotes
"Optimistic OMD enjoys the same regret bound as Sachs et al. (2022), but under weaker assumptions."
"Our result shows advantages in benign problems with small cumulative quantities σ21:T and Σ21:T."
"The study provides novel results for exp-concave functions not previously explored."