Core Concepts
A communication-efficient federated learning algorithm that adaptively compresses each client's local model based on its predicted bandwidth, improving communication efficiency under dynamic network conditions.
Abstract
The paper proposes AdapComFL, a communication-efficient federated learning algorithm that addresses the challenges posed by dynamic and heterogeneous client bandwidth.
Key highlights:
Each client predicts its own bandwidth from its collected bandwidth history and adaptively compresses its local model gradient to fit the predicted budget before uploading it to the server.
The paper improves the traditional sketch compression mechanism by fixing the number of columns while elastically adjusting the number of rows based on the predicted bandwidth. This helps maintain accuracy while reducing upload volume (a minimal sketch of this mechanism follows the list).
The server aggregates sketch models of different sizes by first aligning their dimensions, then linearly accumulating them and computing row-wise averages (see the aggregation sketch below).
Experiments on real bandwidth data and benchmark datasets show that AdapComFL achieves more efficient communication compared to existing algorithms like FedAvg and SketchFL, while maintaining competitive model accuracy.
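To make the compression mechanism concrete, here is a minimal Python sketch of a Count Sketch with a fixed column count and a row count chosen from the predicted bandwidth. The sizing rule `rows_for_bandwidth`, the helper `make_hashes`, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def make_hashes(num_rows, dim, num_cols, seed=0):
    # Hypothetical helper: one (bucket, sign) hash pair per sketch row.
    # All clients share the same seed so row i uses identical hashes
    # everywhere, which is what makes server-side averaging meaningful.
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, num_cols, size=(num_rows, dim))
    signs = rng.choice([-1.0, 1.0], size=(num_rows, dim))
    return buckets, signs

def rows_for_bandwidth(pred_bw_mb, num_cols, bytes_per_val=4,
                       budget_frac=0.5, min_rows=1, max_rows=7):
    # Illustrative sizing rule (an assumption): spend a fraction of the
    # predicted bandwidth (in MB) on the uploaded sketch; columns stay
    # fixed, only the row count flexes.
    budget_vals = pred_bw_mb * budget_frac * 1e6 / bytes_per_val
    return int(np.clip(budget_vals // num_cols, min_rows, max_rows))

def compress(grad, num_rows, num_cols, seed=0):
    # Count Sketch of the flat gradient: each row is an independent
    # estimator, so dropping rows trades accuracy for upload size.
    buckets, signs = make_hashes(num_rows, grad.size, num_cols, seed)
    sketch = np.zeros((num_rows, num_cols))
    for i in range(num_rows):
        np.add.at(sketch[i], buckets[i], signs[i] * grad)
    return sketch

def decompress(sketch, dim, num_cols, seed=0):
    # Median-of-rows estimate of the original gradient; more rows
    # means lower variance, hence the accuracy/bandwidth trade-off.
    num_rows = sketch.shape[0]
    buckets, signs = make_hashes(num_rows, dim, num_cols, seed)
    estimates = np.array([signs[i] * sketch[i, buckets[i]]
                          for i in range(num_rows)])
    return np.median(estimates, axis=0)
```

For example, `rows_for_bandwidth(0.1, 4096)` returns 3 under these default assumptions, so a client with roughly 0.1 MB of predicted headroom would upload a 3x4096 sketch instead of its full gradient.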
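And a sketch of the server-side step, under one plausible reading of "align, accumulate, average row-wise": smaller sketches are zero-padded to the largest row count, and each row is averaged only over the clients that actually contributed to it. This padding scheme is an assumption; the paper's exact alignment rule may differ. Since the column count is fixed for all clients, only the row dimension needs aligning.

```python
def aggregate(sketches):
    # Server-side aggregation: zero-pad smaller sketches to the largest
    # row count, sum, then divide each row by the number of clients
    # that actually contributed to it (alignment rule is an assumption).
    max_rows = max(s.shape[0] for s in sketches)
    num_cols = sketches[0].shape[1]
    total = np.zeros((max_rows, num_cols))
    counts = np.zeros((max_rows, 1))
    for s in sketches:
        total[: s.shape[0]] += s   # linear accumulation
        counts[: s.shape[0]] += 1
    return total / counts          # row-wise average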
Stats
For each client, the average real bandwidth (Raw BW) and predicted bandwidth (Pre BW) are close, with a prediction error of roughly 0.5 MB.
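The paper's predictor is not reproduced in this summary; as a rough illustration of predicting bandwidth from collected history, a simple exponential moving average could stand in (the `alpha` smoothing factor is an assumption):

```python
def predict_bandwidth(history, alpha=0.5):
    # Hypothetical stand-in, not the paper's model: exponentially
    # weighted average of the client's observed bandwidth samples (MB).
    pred = history[0]
    for bw in history[1:]:
        pred = alpha * bw + (1 - alpha) * pred
    return pred
```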
Quotes
"To achieve the communication efficiency of federated learning, there are two categories of approaches: (1) reducing the frequency of total communication by increasing the amount of local computation, (2) reducing the volume of messages each round of communication."
"Remarkably, existing methods of achieving communication efficiency in federated learning ignore two problems. First, the network state of each client changes dynamically, as shown by changes in bandwidth. Second, bandwidth of clients is different."