The paper introduces AdaFGDA, an adaptive algorithm for federated minimax optimization. It provides a theoretical convergence analysis and reports strong empirical results on AUC maximization, robust neural network training, and synthetic minimax problems.
The authors propose efficient algorithms for distributed (federated) non-convex minimax optimization, using adaptive learning rates to improve convergence and lower the overall gradient and communication complexity; the generic problem form is sketched below. Experiments across several datasets and tasks support the effectiveness of the proposed methods.
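For reference, this kind of federated minimax problem is usually stated in the following standard form (generic notation, not necessarily the paper's own), where m clients each hold a local objective f_i:

```latex
\min_{x \in \mathbb{R}^{d_1}} \; \max_{y \in \mathbb{R}^{d_2}} \; f(x, y) \;=\; \frac{1}{m} \sum_{i=1}^{m} f_i(x, y)
```

Here f is non-convex in x (and, in this line of work, typically assumed concave or PL in y); each client computes stochastic gradients of its own f_i, and a server periodically aggregates the iterates, which is where communication cost enters.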
Key points include the introduction of AdaFGDA for non-convex minimax optimization in the federated setting, its theoretical convergence analysis, and the central role of adaptive learning rates in improving efficiency and reducing complexity.
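To make the adaptive-learning-rate idea concrete, here is a minimal single-machine sketch of gradient descent-ascent with an AdaGrad-style step size on the minimization variable. This illustrates the general technique only and is not the paper's AdaFGDA algorithm: it omits the federated structure (local client updates and server averaging) and any momentum or variance reduction; all names and constants below are made up for the example.

```python
import numpy as np

def adaptive_gda(grad_x, grad_y, x, y, steps=2000, gamma=0.1, lam=0.05, eps=1e-8):
    """Illustrative adaptive gradient descent-ascent (not the paper's AdaFGDA).

    x is updated by descent with an AdaGrad-style coordinate-wise learning rate;
    y is updated by plain gradient ascent.
    """
    v = np.zeros_like(x)                              # running sum of squared x-gradients
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        v = v + gx ** 2                               # accumulate squared gradient
        x = x - gamma * gx / (np.sqrt(v) + eps)       # adaptive descent step on x
        y = y + lam * gy                              # plain ascent step on y
    return x, y

# Toy saddle problem f(x, y) = 0.5*x^2 + x*y - 0.5*y^2, with its saddle point at the origin.
grad_x = lambda x, y: x + y
grad_y = lambda x, y: x - y
x_star, y_star = adaptive_gda(grad_x, grad_y, np.array([1.0]), np.array([1.0]))
```

The adaptive scaling gives coordinates with consistently large gradients smaller effective steps; in the federated setting this interacts with how often clients communicate, a coupling the actual algorithm in the paper is designed to handle and this sketch does not attempt.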
Source: arxiv.org