The paper proposes an adaptive decentralized federated learning (DFL) framework that addresses the challenges of device heterogeneity in resource-constrained wireless networks. The key highlights are:
Convergence Analysis: The authors analyze the convergence of DFL when edge devices perform different numbers of local training rounds. The derived convergence bound reveals how the number of local training rounds affects model performance, as well as the influence of the non-i.i.d. level of the data distribution.
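To make the shape of such a bound concrete, the following is an illustrative form only (the notation is assumed for exposition and is not the paper's exact expression), where tau_i denotes the local rounds on device i and delta^2 the non-i.i.d. level:

```latex
% Illustrative shape of a heterogeneous-local-rounds DFL bound
% (assumed notation; not reproduced from the paper).
% F: global loss, eta: learning rate, L: smoothness constant,
% sigma^2: gradient-noise variance, T: number of iterations.
\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\bigl\|\nabla F(\bar{x}_t)\bigr\|^2
\le \underbrace{\frac{F(\bar{x}_0)-F^{\ast}}{\eta T}}_{\text{initialization gap}}
  + \underbrace{\mathcal{O}\!\left(\eta L \sigma^{2}\right)}_{\text{stochastic noise}}
  + \underbrace{\mathcal{O}\!\left(\eta^{2} L^{2} \max_{i}\tau_{i}^{2}\,\delta^{2}\right)}_{\text{drift from local rounds and non-i.i.d. data}}
```

The last term captures the trade-off such an analysis exposes: more local rounds cut communication, but the resulting client drift grows with both the round counts and the non-i.i.d. level.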
Optimization Problem Formulation: The authors formulate an optimization problem that minimizes the loss function of DFL subject to energy and latency constraints on each device. Solving this problem yields the optimal number of local training rounds for each device in each iteration.
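A hedged sketch of such a formulation (the symbols and budget names below are assumptions for exposition, not necessarily the paper's notation):

```latex
% Illustrative formulation; notation assumed, not taken verbatim from the paper.
% tau_i: local training rounds of device i; E^cmp/E^com and T^cmp/T^com:
% per-device computation/communication energy and latency; E_max,i, T_max: budgets.
\begin{aligned}
\min_{\tau_1,\dots,\tau_N}\quad & F\bigl(\bar{x}_K(\tau_1,\dots,\tau_N)\bigr) \\
\text{s.t.}\quad
  & E_i^{\mathrm{cmp}}(\tau_i) + E_i^{\mathrm{com}} \le E_{\max,i},
    \qquad i = 1,\dots,N, \\
  & T_i^{\mathrm{cmp}}(\tau_i) + T_i^{\mathrm{com}} \le T_{\max},
    \qquad i = 1,\dots,N, \\
  & \tau_i \in \mathbb{Z}_{\ge 1}.
\end{aligned}
```

Since the final loss is not available in closed form, problems of this kind are usually solved by substituting the convergence bound as a surrogate objective, which is what makes the per-device decoupling and the closed-form solutions of the next highlight possible.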
Closed-Form Solutions: By reformulating and decoupling the original problem, the authors obtain closed-form solutions for the optimal number of local training rounds, along with an energy-saving aggregation scheme. Specifically, they propose aggregation schemes based on the Minimum Spanning Tree (MST) algorithm and the Ring-AllReduce algorithm, reducing the energy cost of aggregation under different communication conditions.
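As a concrete illustration of the MST idea (a minimal sketch, assuming symmetric per-link transmission energies are known; the helper names are hypothetical and the paper's actual scheme may differ in detail), model updates can be aggregated along a minimum spanning tree so that they cross only the cheapest set of links that keeps the network connected:

```python
import numpy as np

def kruskal_mst(energy):
    """Return MST edges for a symmetric matrix of per-link energy costs.

    energy[i][j] is the energy to send one model update over link (i, j).
    Standard Kruskal's algorithm with union-find.
    """
    n = len(energy)
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u

    edges = sorted((energy[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n))
    mst = []
    for cost, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # edge joins two components: keep it
            parent[ri] = rj
            mst.append((i, j, cost))
    return mst

def aggregate_over_mst(models, energy, root=0):
    """Average device models by pushing partial sums toward the root along
    the MST; broadcasting the result back down retraces the same tree, so
    the total cost is twice the sum of MST edge energies."""
    mst = kruskal_mst(energy)
    adj = {i: [] for i in range(len(models))}
    for i, j, _ in mst:
        adj[i].append(j)
        adj[j].append(i)

    def subtree_sum(u, parent):
        total = np.asarray(models[u], dtype=float).copy()
        count = 1
        for v in adj[u]:
            if v != parent:
                s, c = subtree_sum(v, u)
                total += s
                count += c
        return total, count

    total, count = subtree_sum(root, None)
    return total / count

# Example: four devices, scalar models, symmetric link-energy matrix.
models = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
energy = [[0, 1, 4, 3],
          [1, 0, 2, 5],
          [4, 2, 0, 1],
          [3, 5, 1, 0]]
print(aggregate_over_mst(models, energy))  # -> [2.5]
```

The Ring-AllReduce alternative instead arranges devices in a logical ring and pipelines chunk exchanges around it; which of the two costs less energy depends on the link topology, matching the paper's case split across communication conditions.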
Proposed DFL Framework: The authors propose a DFL framework that jointly applies the optimized number of local training rounds and the energy-saving aggregation scheme; a sketch of the resulting loop follows below. Simulation results show that the proposed framework outperforms conventional schemes with a fixed number of local training rounds and consumes less energy than traditional aggregation schemes.
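Putting the highlights together, the per-iteration control flow might look like the following sketch (illustrative only: optimal_local_rounds is a hypothetical stand-in for the paper's closed-form rule, whose exact expression is not reproduced here, and the per-device gradient oracles are assumed):

```python
def optimal_local_rounds(e_budget, t_budget, e_per_round, t_per_round):
    """Hypothetical stand-in for the paper's closed-form rule: the largest
    round count that fits both the device's energy and latency budgets."""
    return max(1, int(min(e_budget / e_per_round, t_budget / t_per_round)))

def dfl_iteration(models, grad_oracles, energy, e_budget, t_budget,
                  e_per_round, t_per_round, lr=0.1):
    """One adaptive DFL iteration: device-specific local training followed
    by the energy-saving MST aggregation sketched above."""
    for i in range(len(models)):
        tau_i = optimal_local_rounds(e_budget[i], t_budget,
                                     e_per_round[i], t_per_round[i])
        for _ in range(tau_i):
            models[i] = models[i] - lr * grad_oracles[i](models[i])  # local SGD
    avg = aggregate_over_mst(models, energy)  # reuse the MST sketch above
    return [avg.copy() for _ in models]       # every device adopts the average
```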
Key insights extracted from the paper by Zhigang Yan et al. at arxiv.org, 04-01-2024: https://arxiv.org/pdf/2403.20075.pdf