The key highlights and insights of the content are:
The authors consider networked optimization problems with separable convex objective functions and multi-dimensional coupling constraints in the form of both equalities and inequalities.
They reformulate the problem by introducing auxiliary decision variables and applying a network-dependent linear mapping to each coupling constraint. This reformulation decomposes the problem across agents, making it amenable to distributed solution.
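As a concrete illustration of this kind of reformulation (a sketch only; the paper's general network-dependent mapping is not spelled out in this summary), consider a single coupling inequality over a connected communication graph with Laplacian L:

```latex
% Illustrative reformulation (assumed form, not necessarily the paper's exact construction):
% the global coupling constraint is replaced by per-agent constraints through the Laplacian L.
\sum_{i=1}^{N} g_i(x_i) \le 0
\quad \Longleftrightarrow \quad
\exists\, z = (z_1,\dots,z_N) \ \text{such that} \quad
g_i(x_i) + (Lz)_i \le 0, \quad i = 1,\dots,N.
```

The equivalence holds because \(\mathbf{1}^\top L = 0\) (summing the per-agent constraints recovers the coupling constraint) and, for a connected graph, the range of L is the orthogonal complement of \(\mathbf{1}\). Each term \((Lz)_i = \sum_{j \in \mathcal{N}_i} (z_i - z_j)\) involves only agent i and its neighbors, which is what makes the reformulated problem decomposable; for multi-dimensional constraints, L acts blockwise.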
The reformulated problem is treated as a min-min optimization problem in which the auxiliary and primal variables are optimized separately. The authors show that, under mild conditions, the gradients of the outer objective are network-dependent affine transformations of the Karush-Kuhn-Tucker (KKT) multipliers of the inner problem and can therefore be computed locally by the agents.
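Under the illustrative reformulation above, the min-min structure and the gradient identity take the following form (again a sketch under the assumed Laplacian mapping):

```latex
% Outer objective: optimal value of the decomposed inner problem for fixed auxiliary variables z.
\varphi(z) \;:=\; \min_{x} \Big\{ \textstyle\sum_{i=1}^{N} f_i(x_i) \;:\; g_i(x_i) \le -(Lz)_i, \ i=1,\dots,N \Big\},
\qquad
\nabla \varphi(z) \;=\; L^\top \lambda(z),
```

where \(\lambda(z)\) stacks the KKT multipliers \(\lambda_i(z)\) of the agents' local constraints (a standard sensitivity argument for the optimal value function). Because L is the graph Laplacian, agent i's block of \(\nabla\varphi(z)\) depends only on the multipliers held by its neighbors, so the gradient is locally computable.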
For strongly convex objectives, the authors exploit the Lipschitz continuity of these gradients to develop an accelerated distributed optimization algorithm with a convergence-rate guarantee. For general convex objectives, they impose additional coordinate constraints on the auxiliary variables to keep the gradients bounded and develop a gradient-descent-based algorithm.
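A minimal sketch of the outer accelerated loop under the illustrative mapping above. This is a generic Nesterov-style scheme, not necessarily the paper's exact algorithm; the names `solve_inner` and `L_grad` are placeholders, and the inner solver is assumed to return both the primal minimizer and the local KKT multipliers.

```python
import numpy as np

def accelerated_outer_loop(solve_inner, Lap, z0, L_grad, num_iters):
    """Nesterov-style accelerated gradient descent on the auxiliary variables z.

    solve_inner(z) -> (x, lam): solves the decomposed inner problem for fixed z
        (in parallel across agents) and returns the primal solution x together
        with the stacked KKT multipliers lam of the per-agent constraints.
    Lap:     graph Laplacian defining the network-dependent mapping z -> Lap @ z.
    L_grad:  Lipschitz constant of the outer gradient (assumed known here).
    """
    z = y = z0.copy()
    t = 1.0
    for _ in range(num_iters):
        _, lam = solve_inner(y)              # inner problem: local and parallelizable
        grad = Lap.T @ lam                   # outer gradient = affine map of KKT multipliers
        z_next = y - grad / L_grad           # gradient step on the auxiliary variables
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = z_next + ((t - 1.0) / t_next) * (z_next - z)   # momentum extrapolation
        z, t = z_next, t_next
    x, _ = solve_inner(z)                    # primal iterate is feasible whenever we stop
    return x, z
```

Because every primal iterate is obtained by solving the inner (constraint-respecting) problem, it satisfies the coupling constraints no matter when the loop is stopped, mirroring the violation-free property described next; for the general convex case, the momentum update would be replaced by a projected gradient step onto the box constraining the auxiliary variables.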
The proposed algorithms return violation-free (feasible) solutions whenever they are terminated, while still converging to exact solutions with explicit rate guarantees. This contrasts with most existing distributed optimization algorithms, which provide only asymptotic feasibility guarantees.
The authors apply the proposed algorithm to implement a control barrier function (CBF)-based controller in a distributed manner, and the results verify its effectiveness.
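For context, a multi-agent CBF-based safety filter typically has exactly the structure the algorithm targets (a generic sketch; the paper's specific CBF setup is not detailed in this summary): a separable strongly convex objective with inequality constraints that couple the agents' inputs.

```latex
% Generic multi-agent CBF quadratic program (illustrative form):
\min_{u_1,\dots,u_N} \ \sum_{i=1}^{N} \tfrac{1}{2}\,\| u_i - u_i^{\mathrm{nom}} \|^2
\quad \text{s.t.} \quad
\frac{\partial h_{ij}}{\partial x_i}\,\dot{x}_i + \frac{\partial h_{ij}}{\partial x_j}\,\dot{x}_j
+ \alpha\big(h_{ij}(x_i, x_j)\big) \ \ge\ 0 \quad \forall (i,j),
```

where \(h_{ij}\) is a pairwise safety certificate (e.g., for collision avoidance) and the dynamics \(\dot{x}_i\) depend affinely on \(u_i\). Each such constraint couples two agents' inputs, so the distributed algorithm can enforce safety without a central QP solver, and its anytime feasibility means the filtered inputs remain safe even if the iterations are cut short.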
Key insights distilled from the source content at arxiv.org, by Changxin Liu..., 04-12-2024: https://arxiv.org/pdf/2404.07609.pdf