Accelerating Federated Learning with Momentum-Integrated Global Model and Consistent Local Updates
The proposed FedACG algorithm improves cross-client consistency and accelerates convergence of the server model by broadcasting a global model with a lookahead gradient, so that clients perform their local updates along the trajectory of the global gradient. FedACG further regularizes the local updates by aligning each client with the overshot global model, which reduces client drift and improves the stability of the algorithm.
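The round structure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hyperparameter names (`lam` for the momentum/lookahead coefficient, `beta` for the strength of the alignment penalty), the quadratic toy objectives in the usage below, and the exact aggregation rule are assumptions made for the sketch. The server broadcasts the overshot model `w + lam * m`; each client runs local gradient steps with a proximal term pulling it back toward that broadcast model; the server then folds the averaged client displacement into its momentum.

```python
import numpy as np

def fedacg_round(w, m, client_grad, clients,
                 lam=0.85, beta=0.01, lr=0.1, local_steps=5):
    """One communication round of a FedACG-style update (illustrative sketch).

    w: current server model parameters (np.ndarray)
    m: server momentum buffer (np.ndarray)
    client_grad(c, theta): gradient of client c's local loss at theta (assumed interface)
    clients: iterable of per-client data/state passed to client_grad
    """
    # Server broadcasts the lookahead (overshot) model w + lam * m.
    lookahead = w + lam * m
    deltas = []
    for c in clients:
        theta = lookahead.copy()
        for _ in range(local_steps):
            # Local gradient plus an alignment penalty that keeps the
            # client close to the broadcast (overshot) global model.
            g = client_grad(c, theta) + beta * (theta - lookahead)
            theta = theta - lr * g
        deltas.append(theta - lookahead)
    # Server accumulates momentum from the averaged client displacement,
    # then applies it to the global model.
    m_new = lam * m + np.mean(deltas, axis=0)
    w_new = w + m_new
    return w_new, m_new
```

As a toy usage, three clients with quadratic losses centered at 1, 2, and 3 drive the server model toward the global optimum at their mean:

```python
w, m = np.zeros(1), np.zeros(1)
targets = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
grad = lambda a, theta: theta - a  # gradient of 0.5 * ||theta - a||^2
for _ in range(60):
    w, m = fedacg_round(w, m, grad, targets)
```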