Achieving Adaptive Dynamic Regret in Non-stationary Online Convex Optimization
The authors propose novel online algorithms, Sword and Sword++, that achieve problem-dependent dynamic regret bounds in non-stationary environments. The bounds scale with the gradient variation and the cumulative loss of the comparator sequence; both quantities are at most O(T) but can be much smaller in benign environments, so the guarantees can be substantially tighter than the minimax optimal rate while never being worse than it in the worst case.
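To make the quantities in the summary concrete, the following is a sketch of the standard dynamic-regret setup; the symbols P_T, V_T, F_T and the exact bound forms below are reconstructions from the usual notation in this literature, not quoted from the summary itself.

```latex
% Dynamic regret against an arbitrary comparator sequence u_1, \dots, u_T:
\mathrm{D\text{-}Reg}_T(u_1,\dots,u_T)
  \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(u_t).

% Problem-dependent quantities (assumed notation):
P_T = \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert_2
  \quad \text{(path length of the comparators)},
\qquad
V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}}
  \lVert \nabla f_t(x) - \nabla f_{t-1}(x) \rVert_2^2
  \quad \text{(gradient variation)},
\qquad
F_T = \sum_{t=1}^{T} f_t(u_t)
  \quad \text{(cumulative comparator loss)}.

% Problem-dependent bounds of the form
\mathcal{O}\!\left(\sqrt{(1 + P_T + V_T)(1 + P_T)}\right)
\quad \text{or} \quad
\mathcal{O}\!\left(\sqrt{(1 + P_T + F_T)(1 + P_T)}\right)
% reduce to the minimax rate
\mathcal{O}\!\left(\sqrt{T(1 + P_T)}\right)
% in the worst case (V_T, F_T = O(T)), but are much smaller
% when the environment is benign (slowly drifting gradients
% or small comparator losses).
```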