Distributed Accelerated Gradient Flow for Smooth Convex Optimization with Near-Optimal Convergence Rate
The proposed distributed accelerated gradient flow algorithm achieves a convergence rate of O(1/t^(2-β)) for smooth convex objectives, which is near-optimal in the distributed setting: for small β it approaches the O(1/t^2) rate of centralized accelerated gradient methods.
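To make the setting concrete, below is a minimal simulation sketch of a generic distributed accelerated gradient flow, not the paper's specific algorithm: each agent runs a forward-Euler discretization of the Nesterov-type ODE ẍ + (3/t)ẋ + ∇f_i(x) = 0 on its local smooth convex objective, interleaved with a consensus (gossip) averaging round. The ring topology, step size h, local quadratics f_i, and the per-step mixing are all illustrative assumptions; the parameter β in the stated rate, which presumably captures network or discretization effects, is not modeled here.

```python
import numpy as np

# Hypothetical setup (not from the paper): n agents on a ring graph, each
# holding a local quadratic f_i(x) = 0.5 * (x - b_i)^2; the global objective
# is the sum, minimized at mean(b). W is a doubly stochastic gossip matrix.
n, T, h = 8, 5000, 1e-2          # agents, iterations, step size (assumed)
rng = np.random.default_rng(0)
b = rng.normal(size=n)            # local targets

W = np.zeros((n, n))
for i in range(n):                # symmetric ring with self-loops
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def grad(i, xi):                  # gradient of agent i's local quadratic
    return xi - b[i]

x = np.zeros(n)                   # agent positions
v = np.zeros(n)                   # velocities of the second-order flow
for t in range(1, T + 1):
    # Euler step of the accelerated ODE per agent: the damping 3/(h*t)
    # is the classical Nesterov-flow coefficient at continuous time h*t.
    g = np.array([grad(i, x[i]) for i in range(n)])
    v = v + h * (-(3.0 / (h * t)) * v - g)
    # Consensus round: doubly stochastic mixing preserves the network
    # average, so the mean iterate follows the centralized flow.
    x = W @ (x + h * v)

print("consensus error :", np.abs(x - x.mean()).max())
print("optimality gap  :", abs(x.mean() - b.mean()))
```

Because W is doubly stochastic, the averaging step leaves the network mean untouched while contracting disagreement between agents; the mean iterate therefore tracks an accelerated flow on the global objective, which is the usual mechanism behind rates of the form O(1/t^(2-β)) in such analyses.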