Accelerating Federated Learning Convergence through Synthetic Data Shuffling under Data Heterogeneity
Shuffling a small fraction of synthetic data uniformly across clients can reduce the gradient dissimilarity quadratically in the shuffled fraction, and thereby yield a super-linear speedup in the convergence of federated learning algorithms under data heterogeneity.
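As a toy illustration of the quadratic-reduction claim (a sketch, not the paper's construction): the snippet below pools a fraction p of every client's samples, shuffles the pool, and redistributes it uniformly. For the simple per-client quadratic loss f_i(x) = (1/2) E||x - s||^2, the gradient dissimilarity equals the spread of the client data means, so after shuffling it shrinks roughly by the factor (1-p)^2. The helper names (shuffle_fraction, dissimilarity) and this closed-form comparison are illustrative assumptions for the toy model, not results quoted from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_samples, dim = 10, 1000, 5

# Heterogeneous clients: each holds samples centered at its own mean.
data = [rng.normal(loc=i, scale=1.0, size=(n_samples, dim))
        for i in range(n_clients)]

def dissimilarity(datasets):
    # For f_i(x) = (1/2) * mean ||x - s||^2, grad f_i(x) = x - mean(D_i),
    # so (1/N) sum_i ||grad f_i - grad f||^2 = (1/N) sum_i ||mean_i - mean||^2,
    # independent of x in this toy model.
    means = np.stack([d.mean(axis=0) for d in datasets])
    return ((means - means.mean(axis=0)) ** 2).sum(axis=1).mean()

def shuffle_fraction(datasets, p, rng):
    # Pool a fraction p of each client's samples, shuffle the pool,
    # and hand an equal share back to every client.
    k = int(p * n_samples)
    pool = np.concatenate([d[:k] for d in datasets])
    rng.shuffle(pool)  # shuffles rows in place
    chunks = np.split(pool, len(datasets))
    return [np.concatenate([chunk, d[k:]])
            for chunk, d in zip(chunks, datasets)]

zeta2 = dissimilarity(data)
for p in (0.0, 0.1, 0.2, 0.5):
    shuffled = shuffle_fraction(data, p, np.random.default_rng(1))
    print(f"p={p:.1f}  measured dissimilarity = {dissimilarity(shuffled):8.3f}  "
          f"(1-p)^2 * zeta^2 = {(1 - p) ** 2 * zeta2:8.3f}")
```

Running it shows the measured dissimilarity tracking the (1-p)^2 * zeta^2 prediction up to sampling noise, which is the quadratic dependence on the shuffled fraction that the summary sentence refers to.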