Differentially Private Decentralized Learning with Tight Utility Bounds
The proposed PrivSGP-VR algorithm achieves a sublinear convergence rate of O(1/√(nK)) under differentially private Gaussian noise, where n is the number of nodes and K is the number of iterations. This rate is independent of the stochastic gradient variance and exhibits linear speedup with respect to n. By optimizing K under a given privacy budget, PrivSGP-VR attains a tight utility bound that matches its server-client distributed counterparts, improving on existing decentralized algorithms by an extra factor of 1/√n.
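The core mechanism behind the privacy guarantee is each node perturbing its local stochastic gradient with calibrated Gaussian noise before updating. The sketch below illustrates one such differentially private gradient step in a simplified, single-node form; the function name, clipping-based sensitivity control, and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_gradient_step(x, grad, lr=0.1, clip=1.0, sigma=0.5, seed=None):
    """Illustrative DP step (assumed, simplified): clip the local stochastic
    gradient to bound its sensitivity, add Gaussian noise scaled by sigma,
    then take a descent step."""
    rng = np.random.default_rng(seed)
    norm = np.linalg.norm(grad)
    # Clip so the gradient contribution has norm at most `clip`.
    clipped = grad * min(1.0, clip / norm) if norm > 0 else grad
    # Gaussian mechanism: noise standard deviation proportional to sensitivity.
    noisy = clipped + rng.normal(scale=sigma * clip, size=grad.shape)
    return x - lr * noisy

# With sigma=0 the step reduces to plain clipped gradient descent.
x_new = dp_gradient_step(np.zeros(3), np.array([3.0, 4.0, 0.0]), sigma=0.0)
```

In a decentralized setting, each node would follow such a noisy local step with a mixing (push-sum) exchange of iterates with its neighbors; larger K amplifies privacy loss, which is why the utility bound is obtained by tuning K against the privacy budget.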