The paper introduces AdaFGDA, an adaptive algorithm for federated minimax optimization. It provides a theoretical convergence analysis and reports improved performance over existing baselines in experiments on AUC maximization, robust neural network training, and synthetic minimax problems.
The authors propose efficient algorithms for distributed non-convex minimax optimization, using adaptive learning rates to speed up convergence and lower the gradient and communication complexities. Experimental results show the effectiveness of the proposed methods across a range of datasets and tasks.
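For reference, the federated minimax problem these methods address is typically written as follows (a standard formulation; the precise assumptions on each local loss, such as non-convexity in x and concavity or a Polyak-Lojasiewicz condition in y, are spelled out in the paper):

\[
\min_{x \in \mathbb{R}^d} \max_{y \in \mathcal{Y}} \; f(x, y) := \frac{1}{n} \sum_{i=1}^{n} f_i(x, y),
\]

where f_i(x, y) is the local loss on client i's data and n is the number of clients.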
Key points: AdaFGDA targets non-convex minimax objectives under a flexible adaptive-learning-rate framework, comes with convergence guarantees, and outperforms prior federated minimax methods empirically, with the adaptive step sizes central to the efficiency gains.
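As a rough illustration of the adaptive gradient-descent-ascent idea, here is a minimal sketch with RMSProp-style per-coordinate step sizes. The function and parameter names are hypothetical, and this is only the generic descent-ascent pattern with adaptive learning rates, not the paper's exact AdaFGDA update.

import numpy as np

def adaptive_gda_step(x, y, grad_x, grad_y, state, lr=0.05, beta=0.9, eps=1e-8):
    # Track running second-moment estimates of each gradient (RMSProp-style).
    state["vx"] = beta * state["vx"] + (1 - beta) * grad_x ** 2
    state["vy"] = beta * state["vy"] + (1 - beta) * grad_y ** 2
    # Descend on the minimization variable x, ascend on the maximization
    # variable y, each scaled by its per-coordinate adaptive step size.
    x = x - lr * grad_x / (np.sqrt(state["vx"]) + eps)
    y = y + lr * grad_y / (np.sqrt(state["vy"]) + eps)
    return x, y, state

# Toy saddle-point problem f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
# whose unique saddle point is (0, 0).
x, y = np.array([3.0]), np.array([-2.0])
state = {"vx": np.zeros_like(x), "vy": np.zeros_like(y)}
for _ in range(1000):
    gx, gy = x + y, x - y  # exact gradients of f w.r.t. x and y
    x, y, state = adaptive_gda_step(x, y, gx, gy, state)
print(x, y)  # both spiral toward the saddle point at (0, 0)

In the federated setting, each client would run several such local steps on its own loss f_i before the server aggregates the iterates; the challenge the paper addresses is doing this with adaptive step sizes while retaining convergence guarantees.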
by Feihu Huang, ... (arxiv.org, 03-01-2024)
https://arxiv.org/pdf/2211.07303.pdf