The paper introduces AdaFGDA, a novel adaptive algorithm for federated non-convex minimax optimization. The authors provide a theoretical convergence analysis and report superior empirical performance on AUC maximization, robust neural network training, and synthetic minimax problems.
More broadly, the authors propose efficient algorithms for distributed non-convex minimax optimization, relying on adaptive learning rates to accelerate convergence and lower sample and communication complexity. Experimental results across several datasets and tasks demonstrate the effectiveness of the proposed methods.
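For context, federated minimax problems of the kind this work targets are usually written as a finite-sum saddle-point objective averaged over the participating clients. The formulation below is a standard one from the literature rather than a quote from this summary, and the concavity (or PL) assumption on the dual variable is an assumption.

```latex
% Standard federated minimax objective over n clients (assumed formulation):
% x is the shared model variable, y the dual/adversarial variable,
% and f_i the local loss held by client i.
\min_{x \in \mathbb{R}^{d_1}} \; \max_{y \in \mathbb{R}^{d_2}} \;
  f(x, y) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x, y)
% Typically f is non-convex in x and concave (or PL) in y,
% and each client only evaluates stochastic gradients of its own f_i.
```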
In summary, the key contributions are the AdaFGDA algorithm for non-convex minimax optimization, its convergence guarantees, and strong experimental results, with adaptive learning rates highlighted as the main driver of the gains in efficiency and complexity.
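The summary does not spell out the AdaFGDA update itself. As a rough illustration of the adaptive-learning-rate idea it builds on, here is a minimal single-machine sketch of AdaGrad-style gradient descent ascent on a toy saddle problem; the toy objective, step-size rule, and all names are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of adaptive gradient descent ascent on a toy saddle problem
# min_x max_y f(x, y).  This is NOT AdaFGDA; it only illustrates the core idea
# of combining a minimax (descent/ascent) update with adaptive learning rates.
import numpy as np

def grad_f(x, y, a=1.0, b=1.0, c=1.0):
    """Gradients of the toy objective f(x, y) = a/2*x^2 + b*x*y - c/2*y^2."""
    gx = a * x + b * y          # gradient w.r.t. the minimization variable x
    gy = b * x - c * y          # gradient w.r.t. the maximization variable y
    return gx, gy

def adaptive_gda(x0=3.0, y0=-2.0, eta_x=0.5, eta_y=0.5, steps=2000, eps=1e-8):
    """AdaGrad-style gradient descent (in x) / ascent (in y)."""
    x, y = x0, y0
    sx, sy = 0.0, 0.0           # accumulated squared gradients (adaptive scaling)
    for _ in range(steps):
        gx, gy = grad_f(x, y)
        sx += gx ** 2
        sy += gy ** 2
        x -= eta_x * gx / (np.sqrt(sx) + eps)   # descent step on x
        y += eta_y * gy / (np.sqrt(sy) + eps)   # ascent step on y
    return x, y

if __name__ == "__main__":
    x_star, y_star = adaptive_gda()
    print(f"approximate saddle point: x={x_star:.4f}, y={y_star:.4f}")
```

In a federated version of such a scheme, each client would run local descent/ascent steps on its own f_i before the server averages the iterates; the sketch above omits that communication structure for brevity.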
Key insights extracted from arxiv.org, by Feihu Huang et al., 03-01-2024: https://arxiv.org/pdf/2211.07303.pdf