The paper considers the problem of online convex optimization where the objective function is time-varying. It extends coordinate descent algorithms to the online setting, in which the objective function changes between rounds. Instead of solving each round's problem exactly, the algorithm applies only a finite number of coordinate descent iterations per round.
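A minimal sketch of the random variant of this idea, assuming access to the gradient of each round's loss f_t (the function `grad`, the step size, and the per-round budget `K` are illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def online_random_cd(grad, x0, T, K=1, step=0.1, rng=None):
    """Online random coordinate descent (illustrative sketch).

    grad(t, x) returns the gradient of a hypothetical time-varying
    loss f_t at x. Each round, only K randomly chosen coordinates
    are updated instead of solving the round's problem exactly.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float).copy()
    iterates = []
    for t in range(T):
        for _ in range(K):
            i = rng.integers(x.size)      # pick one coordinate uniformly
            x[i] -= step * grad(t, x)[i]  # partial-gradient step on it
        iterates.append(x.copy())
    return iterates
```

For example, tracking a (here static) quadratic f_t(x) = ½‖x − c‖² with gradient x − c drives the iterates toward c despite touching only a few coordinates per round.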
The key highlights and insights are:
The paper provides regret analysis for both random and deterministic online coordinate descent algorithms under convex and strongly convex settings. It derives static and dynamic regret bounds for these algorithms.
For random coordinate descent, the static regret bounds are O(√T) for convex functions and O(log T) for strongly convex functions. The dynamic regret bounds are O(√(C_T·T)) for convex functions and O(C_T) for strongly convex functions, where C_T captures the variation of the optimal solutions over time.
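To make the two regret notions concrete: static regret compares the accumulated loss against the best fixed point in hindsight, while dynamic regret compares against the per-round minimizers, whose path length gives the variation term. A toy computation, assuming per-round losses and minimizers are available (searching the fixed comparator over the round minimizers is a simplification for illustration):

```python
import numpy as np

def regrets(losses, iterates, minimizers):
    """Static and dynamic regret of a sequence of iterates (toy sketch).

    losses[t](x)  -> convex loss at round t
    iterates[t]   -> the algorithm's point x_t
    minimizers[t] -> a per-round minimizer x_t^*
    """
    T = len(losses)
    incurred = sum(losses[t](iterates[t]) for t in range(T))
    # fixed comparator in hindsight, approximated by the best round minimizer
    static_best = min(sum(losses[t](u) for t in range(T)) for u in minimizers)
    dynamic_best = sum(losses[t](minimizers[t]) for t in range(T))
    # path length C_T of the comparator sequence
    C_T = sum(np.linalg.norm(np.asarray(minimizers[t]) - np.asarray(minimizers[t - 1]))
              for t in range(1, T))
    return incurred - static_best, incurred - dynamic_best, C_T
```

When the minimizers drift a lot, C_T grows and the dynamic regret bounds O(√(C_T·T)) and O(C_T) degrade accordingly, which matches the intuition that a fast-moving target is harder to track with a fixed per-round iteration budget.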
For deterministic coordinate descent algorithms, the regret bounds are shown to be comparable to those of online gradient descent algorithms under similar assumptions. The paper relates the regret of the coordinate descent algorithms to the regret of the online projected gradient descent algorithm.
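The deterministic counterpart replaces random coordinate selection with a fixed sweep order. A hedged sketch, assuming a Gauss–Seidel-style cyclic sweep (one pass over all coordinates per round; the step size and update rule are illustrative):

```python
import numpy as np

def online_cyclic_cd(grad, x0, T, step=0.1):
    """Online deterministic (cyclic) coordinate descent sketch.

    Unlike the random variant, every coordinate i = 0..n-1 is updated
    once per round in a fixed order, each step using the freshly
    updated point (Gauss-Seidel style).
    """
    x = np.asarray(x0, dtype=float).copy()
    iterates = []
    for t in range(T):
        for i in range(x.size):
            x[i] -= step * grad(t, x)[i]  # partial step on coordinate i
        iterates.append(x.copy())
    return iterates
```

Because a full sweep touches every coordinate, its per-round progress resembles a (projected) gradient step, which is the intuition behind relating its regret to that of online projected gradient descent.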
The regret bounds achieved by the online coordinate descent algorithms are consistent with the existing literature on centralized full-gradient based online optimization algorithms.
The paper demonstrates that coordinate descent algorithms can be effectively extended to the online optimization setting and provide competitive performance compared to gradient-based methods.