The paper presents FLARE (Federated Learning with Adjusted leaRning ratE), a new framework that addresses device and data heterogeneity in wireless federated learning (WFL). The key idea is to let each participating device adjust its individual learning rate and number of local training iterations to match its instantaneous computing power.
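The paper derives its adjustment rule from a convergence analysis; the snippet below is only a minimal Python sketch of the mechanism, assuming a simple inverse scaling between learning rate and local iteration count. The names `local_update` and `tau_ref`, and the scaling rule itself, are illustrative, not the paper's exact formulas.

```python
import numpy as np

def local_update(w_global, grad_fn, tau_i, eta_base, tau_ref):
    """One FLARE-style local round on device i (illustrative sketch).

    tau_i:    local iterations the device can finish before the round
              deadline, set by its instantaneous computing power.
    eta_base: baseline learning rate tuned for a device that runs
              tau_ref iterations per round.
    """
    # Assumed adjustment rule: scale the learning rate inversely with the
    # number of local iterations, so a slow device (small tau_i) takes fewer
    # but larger steps, while a fast device takes more but smaller steps.
    eta_i = eta_base * tau_ref / tau_i
    w = w_global.copy()
    for _ in range(tau_i):
        w = w - eta_i * grad_fn(w)  # plain local SGD step
    return w
```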
The authors establish a general convergence analysis of FLARE for non-convex models with non-i.i.d. datasets and imbalanced computing power. By minimizing the derived convergence upper bound, they then optimize the scheduling of FLARE to exploit channel heterogeneity. A nested problem structure is revealed, under which the bandwidth is allocated iteratively by binary search and the devices are selected by a new greedy method (see the sketch below). When the training models have large Lipschitz constants, a linear problem structure is further identified, and a low-complexity linear programming scheduling policy is designed for that case.
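The following sketch illustrates only the nested structure of that scheduler: an inner bandwidth allocation solved by bisection and an outer greedy device selection. The linear rate model in `min_bw`, the device-dict fields (`t_comp`, `bits`, `rate`, `id`), and the `score` callable standing in for the paper's convergence bound are all assumptions, not the paper's exact formulation.

```python
def min_bw(dev, T):
    """Bandwidth device `dev` needs to finish computing and uploading within
    deadline T, under an assumed linear rate model (not the paper's exact
    channel model)."""
    t_comm = T - dev["t_comp"]
    return float("inf") if t_comm <= 0 else dev["bits"] / (dev["rate"] * t_comm)

def allocate_bandwidth(selected, total_bw, iters=60):
    """Inner problem: binary-search the round deadline T at which the selected
    devices' minimum bandwidth demands fit the budget; returns T and shares."""
    lo = max(d["t_comp"] for d in selected)
    hi = 1e6  # assumed upper bound on any reasonable deadline
    for _ in range(iters):
        T = 0.5 * (lo + hi)
        if sum(min_bw(d, T) for d in selected) <= total_bw:
            hi = T  # feasible: try a tighter deadline
        else:
            lo = T  # infeasible: relax the deadline
    return hi, {d["id"]: min_bw(d, hi) for d in selected}

def greedy_schedule(devices, total_bw, score):
    """Outer problem: greedily grow the device set, re-running the inner
    bandwidth allocation after each tentative addition. `score` is a
    placeholder surrogate for the convergence bound (to be maximized,
    e.g. the negated bound)."""
    selected, pool, best = [], list(devices), score([], 0.0)
    while pool:
        cand = max(pool, key=lambda d: score(
            selected + [d], allocate_bandwidth(selected + [d], total_bw)[0]))
        val = score(selected + [cand],
                    allocate_bandwidth(selected + [cand], total_bw)[0])
        if val <= best:
            break  # no candidate improves the surrogate bound
        selected.append(cand)
        pool.remove(cand)
        best = val
    return selected
```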
Experiments demonstrate that, with the proposed scheduling policy, FLARE consistently outperforms the baselines in test accuracy and converges much faster, under both i.i.d. and non-i.i.d. data distributions, and under both uniform and non-uniform device selection.
Source: Bingnan Xiao et al., arxiv.org, 04-24-2024. https://arxiv.org/pdf/2404.14811.pdf