The authors propose a new compressed decentralized stochastic gradient method, termed "compressed exact diffusion with adaptive stepsizes (CEDAS)", which achieves a convergence rate comparable to that of centralized stochastic gradient descent (SGD) for both smooth strongly convex and smooth nonconvex objective functions under unbiased compression operators.
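For reference, a standard example of an unbiased compression operator is rand-k sparsification with rescaling, which keeps k random coordinates and scales them by d/k so that the compressed vector equals the original in expectation. The sketch below is a minimal, hypothetical illustration of this property (the helper name `rand_k_compress` is ours, not from the paper):

```python
import numpy as np

def rand_k_compress(x, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates,
    rescaled by d/k so that E[C(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

# Empirical check of unbiasedness on a random vector.
rng = np.random.default_rng(0)
x = rng.standard_normal(10)
avg = np.mean([rand_k_compress(x, 3, rng) for _ in range(20000)], axis=0)
print(np.max(np.abs(avg - x)))  # should be close to zero
```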
The authors propose two communication-efficient decentralized optimization algorithms, Compressed Push-Pull (CPP) and Broadcast-like CPP (B-CPP), that achieve linear convergence for minimizing strongly convex and smooth objective functions over general directed networks.
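As background, CPP builds on push-pull gradient tracking, which mixes decision variables with a row-stochastic matrix and gradient trackers with a column-stochastic matrix, so it runs over directed networks without requiring doubly stochastic weights. The sketch below is a minimal, hypothetical illustration of that underlying (uncompressed) update on a directed ring with quadratic local objectives; it omits the compression step and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m = 4, 3, 6                               # agents, dimension, samples per agent
A = [rng.standard_normal((m, d)) / np.sqrt(m) for _ in range(n)]
b = [rng.standard_normal(m) for _ in range(n)]

def grad(X):
    """Stack of local gradients of f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n)])

# Directed ring with self-loops: R row-stochastic, C column-stochastic.
R = np.zeros((n, n)); C = np.zeros((n, n))
for i in range(n):
    R[i, i] = R[i, (i - 1) % n] = 0.5           # agent i pulls x from agent i-1
    C[i, i] = C[(i + 1) % n, i] = 0.5           # agent i pushes its tracker to agent i+1

alpha = 0.02
X = np.zeros((n, d))
G_old = grad(X)
Y = G_old.copy()                                # gradient trackers initialized at local gradients
for _ in range(20000):
    X = R @ (X - alpha * Y)                     # "pull" step on decision variables
    G_new = grad(X)
    Y = C @ Y + G_new - G_old                   # "push" step on gradient trackers
    G_old = G_new

# Minimizer of the sum of the local quadratics, for comparison.
x_star = np.linalg.solve(sum(A[i].T @ A[i] for i in range(n)),
                         sum(A[i].T @ b[i] for i in range(n)))
print(np.max(np.abs(X - x_star)))               # all agents should be near x_star
```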