Core Concepts
FedComLoc is a novel approach that integrates compression techniques into federated learning to effectively reduce communication costs.
Abstract
The article discusses the challenge of high communication costs in Federated Learning (FL) and introduces FedComLoc as a solution. FedComLoc combines compression techniques, namely model sparsification and quantization, with efficient local training. The paper presents extensive experimental validation showing that FedComLoc significantly reduces communication costs while maintaining model performance. Experiments on popular federated datasets such as FedMNIST and FedCIFAR10 evaluate FedComLoc under varying sparsity ratios, degrees of data heterogeneity, quantization bit widths, and numbers of local iterations. The results demonstrate that FedComLoc reduces communication overhead and improves training efficiency compared to baseline methods such as FedAvg and SCAFFOLD.
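To make the pipeline concrete, the sketch below shows one FedComLoc-style communication round: each client runs several local SGD steps, its model is then sparsified (Top-K by magnitude) and quantized before being sent to the server, which averages the compressed models. This is a minimal illustration under my own assumptions, not the authors' reference implementation; the compression operators, hyperparameters, and function names are illustrative.

```python
# Minimal sketch of one FedComLoc-style round (illustrative, not the paper's
# reference implementation). Models are flat 1-D NumPy vectors; Top-K
# sparsification and uniform quantization stand in for the compressors.
import numpy as np

def top_k_sparsify(x, sparsity):
    """Keep only the largest-magnitude entries of a 1-D vector; zero the rest."""
    k = max(1, int(round((1.0 - sparsity) * x.size)))
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def quantize(x, bits):
    """Uniformly quantize values to 2**bits levels over [min(x), max(x)]."""
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((x - lo) / scale) * scale + lo

def local_training(model, data, lr, local_iters, grad_fn):
    """Run `local_iters` SGD steps on one client's data."""
    for _ in range(local_iters):
        model = model - lr * grad_fn(model, data)
    return model

def fedcomloc_round(global_model, client_datasets, grad_fn,
                    lr=0.1, local_iters=10, sparsity=0.9, bits=8):
    """One communication round: local training, then compressed upload + averaging."""
    compressed_models = []
    for data in client_datasets:
        local_model = local_training(global_model.copy(), data, lr, local_iters, grad_fn)
        # Compress what is sent back to the server to cut communication cost.
        compressed_models.append(quantize(top_k_sparsify(local_model, sparsity), bits))
    # Server aggregates the compressed client models (FedAvg-style mean).
    return np.mean(compressed_models, axis=0)
```

Compressing the uploaded model rather than the raw gradient mirrors the sparse and quantized local-training exchange described in the abstract; a full implementation would additionally handle client sampling, learning-rate schedules, and structured model parameters.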
Stats
There exists c ∈ R such that ∥∇f_i(x)∥ ≤ c for all x and 1 ≤ i ≤ n.
There exists c ∈ R such that (1/n) Σ_{i=1}^{n} ∥∇f_i(x)∥² ≤ c ∥∇f(x)∥² for all x.
Linear convergence has been proven when all functions f_i are strongly convex.
Total cost is a combined measure of communication cost and local computation cost (a worked example follows below).
A communication round has unit cost, while a local training round has cost τ.
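Under this cost model, the total cost of a run can be tallied as follows. This is a minimal sketch assuming a fixed number of communication rounds, each followed by the same number of local training rounds of cost τ; the function and parameter names are illustrative, not from the paper.

```python
# Illustrative tally under the stated cost model: each communication round
# costs 1 unit, each local training round costs tau (names are hypothetical).
def total_cost(comm_rounds, local_rounds_per_comm, tau):
    """Total cost = communication cost + local computation cost."""
    communication_cost = comm_rounds * 1.0
    computation_cost = comm_rounds * local_rounds_per_comm * tau
    return communication_cost + computation_cost

# Example: 100 communication rounds with 10 local rounds each and tau = 0.05
# gives 100 * 1 + 100 * 10 * 0.05 = 150 cost units.
print(total_cost(100, 10, 0.05))
```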
Quotes
"Our primary objective is to solve the problem (ERM) and deploy the optimized global model to all clients."
"To mitigate these costs, FL often employs Local Training (LT), a strategy where local parameters are updated multiple times before aggregation."
"We proposed three variants of our algorithm addressing several key bottlenecks in FL."
"Our evaluation comprises three distinct aspects."