The paper introduces the ADBB algorithm for distributed optimization over multi-agent systems. It addresses the challenges of unbalanced directed networks by using row-stochastic weight matrices, aiming for faster convergence and lower computational cost while preserving privacy. By establishing contraction relationships among the consensus error, the optimality gap, and the gradient tracking error, ADBB is proven to converge linearly to the globally optimal solution. The theoretical analysis is supported by simulations on real-world data sets.
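To make the row-stochastic setting concrete, below is a minimal sketch of gradient tracking over an unbalanced directed network using only a row-stochastic weight matrix, in the spirit of the paper's setup. This is NOT the paper's ADBB updates (whose exact recursions and step-size rule would need the paper itself); it uses a fixed step size, an assumed 4-agent directed ring, and assumed scalar quadratic objectives `f_i(x) = (x - b_i)^2 / 2`. A Perron-vector estimate `Y` corrects the bias a merely row-stochastic matrix would otherwise introduce.

```python
import numpy as np

n = 4
b = np.array([1.0, 2.0, 3.0, 4.0])   # agent-local data (assumed); sum f_i is minimized at mean(b) = 2.5

def grad(x):
    return x - b                      # stacked local gradients: grad f_i(x_i) = x_i - b_i

# Row-stochastic A on a directed ring with self-loops; the per-row self-weights
# differ, so column sums != 1 and A is NOT doubly stochastic (unbalanced digraph).
w = np.array([0.2, 0.5, 0.7, 0.4])    # assumed self-weights
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = w[i]
    A[i, (i + 1) % n] = 1.0 - w[i]

alpha = 0.05                          # fixed step size (an adaptive rule, as in ADBB, is omitted)
x = np.zeros(n)                       # local solution estimates
Y = np.eye(n)                         # Perron-vector estimates; diag(A^k) -> Perron weights
z = grad(x) / np.diag(Y)              # gradient-tracking variable

for _ in range(3000):
    g_old = grad(x) / np.diag(Y)      # scaled gradient at the previous iterate
    x = A @ x - alpha * z             # consensus step plus descent along tracked gradient
    Y = A @ Y                         # power iteration: Y -> 1 * pi^T
    z = A @ z + grad(x) / np.diag(Y) - g_old   # track the scaled average gradient

print(x)                              # all entries should approach the global optimum 2.5
```

The `Y` recursion is the standard device for row-stochastic networks: each agent divides its local gradient by its own Perron-weight estimate, which removes the asymmetry of the weight matrix so the tracked direction converges to the true average gradient.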
Key insights distilled from the paper by Jinhui Hu, Xi... (arxiv.org, 02-29-2024): https://arxiv.org/pdf/2305.11469.pdf