Key concepts
The author presents the ADBB method for distributed optimization over unbalanced directed networks, focusing on larger step-sizes and accelerated convergence.
Summary
The paper introduces the ADBB algorithm for distributed optimization over multi-agent systems. It addresses the challenges of unbalanced directed networks by utilizing row-stochastic weight matrices. The method aims to achieve faster convergence and reduce computational costs while ensuring privacy preservation. By establishing contraction relationships among the consensus error, the optimality gap, and the gradient-tracking error, the authors prove that ADBB converges linearly to the globally optimal solution. The theoretical analysis is supported by simulations on real-world data sets.
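The paper's exact ADBB update is not reproduced here, but its backbone, gradient tracking combined with adapt-then-combine mixing, can be illustrated on a toy problem. The sketch below is a generic gradient-tracking scheme, not the authors' algorithm: it assumes a doubly stochastic mixing matrix and a fixed step-size for simplicity, whereas ADBB handles row-stochastic weights over unbalanced directed networks and computes BB step-sizes. The function name and quadratic losses are illustrative choices.

```python
import numpy as np

def atc_gradient_tracking(A, b, alpha=0.1, iters=1000):
    """Adapt-then-combine gradient tracking for local losses
    f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer is mean(b).

    A : doubly stochastic mixing matrix (n x n)  -- simplifying assumption;
        ADBB itself only requires row-stochastic weights.
    b : vector of local targets, one per agent.
    """
    grad = lambda z: z - b             # stacked local gradients
    x = np.zeros(len(b))               # local decision variables
    y = grad(x).copy()                 # y_i tracks the average gradient
    for _ in range(iters):
        x_new = A @ (x - alpha * y)              # adapt (gradient step), then combine
        y = A @ (y + grad(x_new) - grad(x))      # gradient-tracking update
        x = x_new
    return x

# 4-agent ring with self-loops; this particular A is doubly stochastic.
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = atc_gradient_tracking(A, b)        # every agent approaches mean(b) = 2.5
```

The "adapt-then-combine" structure is visible in the update: each agent first takes a local gradient step, and only then averages with its neighbors, which is the variation the step-size computation in the paper builds on.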
Statistics
Each agent in the system uses only local computation and communication.
A real-world dataset is used in simulations to validate the correctness of the theoretical analysis.
The Barzilai-Borwein (BB) method is introduced into distributed optimization over unbalanced directed networks.
ADBB can solve optimization problems without requiring knowledge of neighbors' out-degrees.
The step-sizes are computed locally via an adapt-then-combine variation of the BB method.
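For context, the classical (centralized) BB step-sizes that the method adapts are computed from two successive iterates and gradients: with s = x_k - x_{k-1} and y = g_k - g_{k-1}, BB1 is (s.s)/(s.y) and BB2 is (s.y)/(y.y). The sketch below shows this standard formula only; how each agent embeds it in the distributed update is specific to the paper, and the function name is an illustrative choice.

```python
import numpy as np

def bb_step_sizes(x_prev, x_curr, g_prev, g_curr):
    """Classical Barzilai-Borwein step-sizes from two successive iterates."""
    s = x_curr - x_prev                       # iterate difference
    y = g_curr - g_prev                       # gradient difference
    return (s @ s) / (s @ y), (s @ y) / (y @ y)   # BB1, BB2

# Two iterates of f(x) = 0.5 * x^T diag(1, 2) x, whose gradient is diag(1, 2) @ x
x_prev, x_curr = np.array([1.0, 1.0]), np.array([0.5, 0.5])
g_prev, g_curr = np.array([1.0, 2.0]), np.array([0.5, 1.0])
bb1, bb2 = bb_step_sizes(x_prev, x_curr, g_prev, g_curr)   # 2/3 and 0.6
```

Both values fall between the inverse largest and inverse smallest Hessian eigenvalues (0.5 and 1 here), which is what lets BB-type methods use larger step-sizes than a worst-case fixed step.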