Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee
The authors propose and analyze several inexact regularized Newton-type methods for finding a global saddle point of unconstrained convex-concave min-max optimization problems, achieving an order-optimal iteration complexity of O(ε^(-2/3)).
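As a sketch of the problem setting (standard definitions for this problem class, not quoted from the paper itself): the methods target problems of the form

```latex
\min_{x \in \mathbb{R}^n} \; \max_{y \in \mathbb{R}^m} \; f(x, y),
```

where $f$ is convex in $x$ for each fixed $y$ and concave in $y$ for each fixed $x$. A point $(x^\star, y^\star)$ is a global saddle point if

```latex
f(x^\star, y) \;\le\; f(x^\star, y^\star) \;\le\; f(x, y^\star)
\qquad \text{for all } x \in \mathbb{R}^n,\; y \in \mathbb{R}^m .
```

Under these definitions, the $O(\varepsilon^{-2/3})$ figure is presumably the number of iterations needed to reach an $\varepsilon$-accurate saddle point, which matches the known $\Omega(\varepsilon^{-2/3})$ lower bound for second-order methods on this problem class and is what makes the guarantee order-optimal.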