The paper investigates the convergence performance of the classical FedAvg algorithm in the presence of arbitrary client dropouts. It is found that client dropouts introduce a biased update in each training iteration, and with the common choice of a decaying learning rate, the model learned by FedAvg may oscillate around a stationary point of the global loss function in the worst case.
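To make the source of this bias explicit (with notation assumed here for illustration, not taken from the paper): let there be $M$ clients in total, let $\mathcal{A}^t$ denote the subset of clients that remain active in iteration $t$, and let $\Delta_m^t$ be client $m$'s model update. The server can only average the updates it actually receives, which in general differs from the full-participation FedAvg update:

\[
\frac{1}{|\mathcal{A}^t|} \sum_{m \in \mathcal{A}^t} \Delta_m^t
\;\neq\;
\frac{1}{M} \sum_{m=1}^{M} \Delta_m^t
\quad \text{in general.}
\]

Each aggregation step is therefore skewed toward the data of whichever clients happened to stay active, and under arbitrary dropout patterns this skew need not cancel out across iterations, which is what keeps FedAvg from settling at a stationary point in the worst case.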
To address this issue, the authors propose a novel federated learning algorithm named MimiC. The key idea of MimiC is to augment each received model update from the active clients with a correction variable derived from previous iterations, in order to mimic an imaginary central update. This correction is performed at the server side and introduces no additional computation or communication overhead to the clients.
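To make the mechanism concrete, below is a minimal Python sketch of this kind of server-side correction. The data structures (`updates`, `corrections`) and the rule used to refresh the correction variables are assumptions made for illustration; they follow the description above but are not necessarily MimiC's exact update rule.

```python
import numpy as np

def aggregate_with_correction(updates, corrections, dim):
    """Illustrative server-side aggregation with per-client correction variables.

    updates:     {client_id: np.ndarray} -- raw updates from the clients that
                 remained active this round
    corrections: {client_id: np.ndarray} -- correction variables held by the
                 server, derived from earlier iterations (assumed server state)
    dim:         model dimension, used for zero-initialized corrections
    """
    corrected = [
        delta + corrections.get(cid, np.zeros(dim))  # augment each received update
        for cid, delta in updates.items()
    ]
    global_update = np.mean(corrected, axis=0)  # average over active clients only

    # Illustrative refresh rule (an assumption, not necessarily MimiC's exact one):
    # steer each active client's correction toward the gap between the aggregate
    # and its own raw update, to be applied in later iterations.
    for cid, delta in updates.items():
        corrections[cid] = global_update - delta
    return global_update, corrections

# Example round: clients 1 and 3 drop out, only clients 0 and 2 report updates.
rng = np.random.default_rng(0)
corrections = {}
for t in range(3):
    active = {cid: rng.normal(size=10) for cid in (0, 2)}
    update, corrections = aggregate_with_correction(active, corrections, dim=10)
```

Because the correction variables live entirely on the server and are refreshed from quantities it has already received, the active clients run unmodified local training, which is why the scheme adds no client-side computation or communication.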
The theoretical analysis shows that although the modified update in MimiC is still a biased estimate of the global gradient, its deviation from that gradient is bounded and diminishes to zero under a proper choice of learning rates. The convergence of MimiC is further characterized under a mild assumption on the maximum number of consecutive iterations in which a client may drop out. It is also shown that MimiC converges with high probability when clients drop out according to a probabilistic pattern.
Extensive simulation results validate the convergence of MimiC under client dropouts in different scenarios and demonstrate that it consistently produces better models than the baseline methods.
Key insights distilled from:
by Yuchang Sun, ... at arxiv.org, 04-09-2024
https://arxiv.org/pdf/2306.12212.pdf