Core Concept
Selecting beneficial local gradients accelerates Federated Learning convergence.
Summary
The paper proposes the BHerd strategy for accelerating Federated Learning (FL) convergence by selecting beneficial local gradients. It addresses the challenges posed by Non-IID client datasets in FL systems and shows how selecting such gradients mitigates their effects. Experiments on several datasets and models demonstrate that the BHerd strategy improves model convergence.
Introduction to Federated Learning and its challenges.
Proposal of the BHerd strategy for selecting beneficial local gradients.
Experiments conducted on various datasets and models to validate the strategy.
Comparison with baseline approaches like Centralized SGD, FedAvg, GraB-FedAvg, FedNova, and SCAFFOLD.
Sensitivity analysis of hyperparameters like alpha, epochs per round, batch size, and number of clients.
Distribution of selected local gradients and their distance from the mean gradient.
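The selection idea in the list above can be sketched in code. This is a minimal, hypothetical illustration (not the paper's actual algorithm): it assumes BHerd-style selection ranks a client's local gradients by their distance from the mean gradient and keeps the closest fraction, controlled by a hyperparameter alpha. The function name `bherd_select` and the even split by alpha are assumptions for illustration.

```python
import numpy as np

def bherd_select(gradients, alpha=0.5):
    # Hypothetical sketch: rank local gradients by their distance from the
    # mean gradient and keep the closest fraction `alpha` as "beneficial".
    grads = np.stack(gradients)                        # (n, d) stacked local gradients
    mean_grad = grads.mean(axis=0)                     # mean local gradient
    dists = np.linalg.norm(grads - mean_grad, axis=1)  # distance of each gradient from the mean
    k = max(1, int(alpha * len(grads)))                # number of gradients to keep
    keep = np.argsort(dists)[:k]                       # indices of the k closest gradients
    return keep, grads[keep].mean(axis=0)              # selected indices and their aggregate

# Toy example: four 2-D gradients, one outlier; keep the closest half.
g = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
     np.array([5.0, 5.0]), np.array([1.1, -0.1])]
idx, agg = bherd_select(g, alpha=0.5)
```

Under this sketch, the outlier gradient (index 2) is excluded from the aggregate, which mirrors the paper's observation that selected gradients cluster near the mean gradient.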
Statistics
We set |D| = 6 × 10^4, B = 100, and E = 1.
Total rounds T = 500 with learning rate η = 1 × 10^-4.
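These reported settings fix the per-round local workload. A small sketch of the resulting arithmetic, assuming the dataset is split evenly over a hypothetical number of clients (N = 10 is an assumption, not from the source):

```python
# Reported hyperparameters from the paper
D_SIZE = 60_000   # total dataset size |D| = 6 * 10^4
BATCH = 100       # mini-batch size B
EPOCHS = 1        # local epochs per round E
ROUNDS = 500      # total communication rounds T
ETA = 1e-4        # learning rate eta

# Assumed (not from the source): dataset split evenly over N clients.
N_CLIENTS = 10

# Local SGD steps each client performs per communication round.
local_steps = (D_SIZE // N_CLIENTS // BATCH) * EPOCHS
```

With these values each client runs 60 local mini-batch steps per round before its gradients are ranked and aggregated.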
Quotes
"In pursuit of this subset, a reliable approach involves determining a measure of validity to rank the samples within the dataset."
"Our BHerd strategy is effective in selecting beneficial local gradients to mitigate the effects brought by the Non-IID dataset."