The content summarizes the challenges of implementing federated learning (FL) efficiently in realistic edge systems, where both system and statistical heterogeneity degrade performance. To address these challenges, the authors investigate a two-tier hierarchical federated edge learning (HFEL) system, in which edge devices connect to edge servers and the edge servers are interconnected through peer-to-peer (P2P) edge backhauls.
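The summary does not give the authors' exact training procedure, but the two-tier structure can be illustrated with a rough simulation: devices run local updates, each edge server averages its devices' models, and the servers then mix their models over P2P links via a mixing matrix. All function names, dimensions, and the mixing weights below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def device_update(model, lr=0.1, steps=5, rng=None):
    """Stand-in for local SGD: nudge the model toward a hypothetical
    device-specific optimum (simulates statistical heterogeneity)."""
    rng = rng or np.random.default_rng()
    local_opt = model + rng.normal(0, 0.1, size=model.shape)
    for _ in range(steps):
        model = model - lr * (model - local_opt)
    return model

def edge_aggregate(device_models):
    """Tier 1: an edge server averages the models of its devices."""
    return np.mean(device_models, axis=0)

def p2p_mix(edge_models, mix_matrix):
    """Tier 2: edge servers exchange models over P2P backhauls and take
    a weighted average given by a doubly stochastic mixing matrix."""
    return mix_matrix @ edge_models

rng = np.random.default_rng(0)
edge_models = np.zeros((2, 4))              # 2 edge servers, 4-dim model
W = np.array([[0.5, 0.5], [0.5, 0.5]])      # illustrative P2P mixing matrix

for _ in range(3):                          # three global rounds
    edge_models = np.stack([
        edge_aggregate([device_update(m, rng=rng) for _ in range(3)])
        for m in edge_models                # 3 devices per edge server
    ])
    edge_models = p2p_mix(edge_models, W)
```

With this fully mixing `W`, both servers hold identical models after every round; sparser P2P topologies would mix more slowly, which is why the connection pattern matters for convergence.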
The authors formulate an optimization problem that minimizes the total training latency by jointly allocating computation and communication resources and adjusting the P2P connections. To ensure convergence under dynamic topologies, they analyze the convergence error bound and introduce a model consensus constraint into the optimization problem.
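The paper's exact consensus constraint is not reproduced in this summary. One natural way to picture such a constraint is as a bound on how far the edge servers' models may drift apart; the metric and threshold below are purely illustrative assumptions.

```python
import numpy as np

def consensus_gap(edge_models):
    """Maximum pairwise distance between edge-server models. A model
    consensus constraint would bound this gap, so that sparser P2P
    topologies cannot let the models diverge (illustrative metric)."""
    n = len(edge_models)
    return max(
        np.linalg.norm(edge_models[i] - edge_models[j])
        for i in range(n) for j in range(i + 1, n)
    )

models = np.array([[0.0, 1.0], [0.5, 1.0], [1.0, 2.0]])
gap = consensus_gap(models)
eps = 2.0            # hypothetical consensus threshold
assert gap <= eps    # topology choices must keep the gap below eps
```

A constraint of this shape couples the topology-adjustment decision to the convergence analysis: dropping P2P links saves communication time only as long as the consensus gap stays bounded.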
The proposed problem is then decomposed into several subproblems, which the authors solve alternately in an online fashion. Their method, dubbed FedRT, facilitates the efficient implementation of large-scale FL in edge networks under data and system heterogeneity.
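The summary does not specify the subproblem solvers, but the decompose-and-alternate pattern can be sketched generically: fix one block of variables (e.g., bandwidths), solve for the other (e.g., CPU frequencies), and repeat. The latency model, proportional-allocation heuristic, and all numbers below are assumptions for illustration, not the paper's solvers.

```python
import numpy as np

def total_latency(f, b, c, d):
    """Round latency = slowest edge server: compute time c_i/f_i plus
    communication time d_i/b_i (hypothetical linear delay model)."""
    return float(np.max(c / f + d / b))

def allocate(budget, demand):
    """Proportional split of a shared budget: share_i ∝ demand_i
    equalizes demand_i/share_i across servers (a simple heuristic,
    not the paper's exact subproblem solution)."""
    return budget * demand / demand.sum()

c = np.array([1.0, 2.0, 3.0])   # per-server compute workloads (assumed)
d = np.array([2.0, 1.0, 1.0])   # per-server model-transfer sizes (assumed)
F, B = 6.0, 4.0                 # total frequency / bandwidth budgets

f = np.full(3, F / 3)           # start from uniform allocations
b = np.full(3, B / 3)
history = [total_latency(f, b, c, d)]
for _ in range(5):
    f = allocate(F, c)          # block 1: computation subproblem
    b = allocate(B, d)          # block 2: communication subproblem
    history.append(total_latency(f, b, c, d))
```

Alternating over the blocks never increases the objective here; in the paper the loop additionally runs online, re-solving as device resources and P2P connectivity change.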
The authors conduct comprehensive experiments on three benchmark datasets (CIFAR-10, FEMNIST, and FMNIST) under various data distributions and resource configurations. The results demonstrate that FedRT outperforms baselines in terms of total training latency and convergence speed while maintaining model accuracy.
Key insights distilled from the paper by Zhidong Gao,... at arxiv.org, 10-01-2024: https://arxiv.org/pdf/2409.19509.pdf