The paper proposes Federated Sparse Gradient Congruity (FedSGC), a novel federated learning framework that combines dynamic sparse training with gradient congruity inspection to address high computational and communication costs as well as poor generalization in federated learning.
The key idea is to leverage gradient congruity: neurons whose gradients conflict with the global model's learning direction are pruned, since they are less likely to encode generalizable information, while neurons whose gradients align with that direction are prioritized for regrowth.
This congruity-guided prune-and-grow mechanism lets FedSGC substantially reduce local computation and communication overhead while improving the generalization of the federated model. The authors evaluate FedSGC on the MNIST and CIFAR-10 datasets under challenging non-IID settings and show that it outperforms state-of-the-art federated learning methods in accuracy, convergence speed, and communication efficiency.
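A minimal sketch of how such a congruity-guided prune-and-grow step could look on a flat weight vector is shown below. The helper names (`congruity_scores`, `prune_and_grow`), the weight-level (rather than neuron-level) masking, and the fixed prune fraction are illustrative assumptions, not the paper's exact algorithm or schedule.

```python
import numpy as np

def congruity_scores(local_grad, global_direction):
    """Per-parameter congruity: positive when the local gradient points
    the same way as the global model's learning direction."""
    return local_grad * global_direction  # elementwise; > 0 means agreement

def prune_and_grow(weights, mask, local_grad, global_direction, prune_frac=0.1):
    """One prune-and-grow step on a flat weight vector.

    `mask` is 1 for active weights, 0 for pruned ones; overall sparsity is
    kept constant by regrowing as many weights as were pruned.
    """
    scores = congruity_scores(local_grad, global_direction)

    # Prune: among active weights, drop those whose gradients most strongly
    # conflict with the global direction (lowest congruity scores).
    active = np.flatnonzero(mask)
    n_prune = max(1, int(prune_frac * active.size))
    to_prune = active[np.argsort(scores[active])[:n_prune]]
    mask[to_prune] = 0
    weights[to_prune] = 0.0

    # Grow: among currently pruned weights, reactivate those whose gradients
    # agree most with the global direction (highest congruity scores).
    inactive = np.flatnonzero(mask == 0)
    to_grow = inactive[np.argsort(scores[inactive])[-n_prune:]]
    mask[to_grow] = 1  # regrown weights start from zero and are trained locally

    return weights, mask

# Illustrative usage with random data (not the paper's setup):
rng = np.random.default_rng(0)
m = (rng.random(100) < 0.5).astype(int)       # ~50% sparse mask
w = rng.normal(size=100) * m                  # pruned weights held at zero
g_local = rng.normal(size=100)                # client's local gradient
g_global = rng.normal(size=100)               # global learning direction
w, m = prune_and_grow(w, m, g_local, g_global)
```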
Source: https://arxiv.org/pdf/2405.01189.pdf (arxiv.org, 05-03-2024)