The paper presents a distributed quasi-Newton (DQN) method for solving separable multi-agent optimization problems. The key highlights are:
DQN enables a group of agents to compute an optimal solution of a separable multi-agent optimization problem by leveraging an approximation of the curvature of the aggregate objective function. Each agent computes a descent direction from its local estimate of the aggregate Hessian, obtained via a quasi-Newton approximation scheme.
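The paper defines its own update rules, but the following minimal Python sketch illustrates the general quasi-Newton idea behind this step: an agent maintains a BFGS-style approximation B of the aggregate Hessian and uses it to compute a descent direction. The function names and the curvature-condition threshold are illustrative, not taken from the paper.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B, given the step
    s = x_new - x_old and the gradient difference y = g_new - g_old."""
    Bs = B @ s
    sy = s @ y
    if sy <= 1e-12:          # skip the update if the curvature condition fails
        return B
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

def descent_direction(B, grad):
    """Quasi-Newton descent direction d = -B^{-1} grad."""
    return -np.linalg.solve(B, grad)
```

In the distributed setting each agent would apply an update of this kind to its own local estimate of the aggregate Hessian rather than to the true aggregate curvature.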
The authors also introduce a distributed quasi-Newton method for equality-constrained optimization (EC-DQN), where each agent takes Karush-Kuhn-Tucker-like update steps to compute an optimal solution.
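As a rough illustration of a Karush-Kuhn-Tucker-like step, assume a linearly constrained problem min f(x) subject to A x = b; the sketch below solves the KKT system centrally with a quasi-Newton approximation B in place of the true Hessian. EC-DQN distributes this computation across agents; the centralized solve here is for exposition only, and the names are hypothetical.

```python
import numpy as np

def kkt_step(B, grad, A, x, b):
    """One KKT-like step for min f(x) s.t. A x = b, using a quasi-Newton
    Hessian approximation B. Solves
        [B  A^T] [dx ]   [-grad     ]
        [A   0 ] [lam] = [-(A x - b)]
    and returns the primal step dx and the multiplier estimate lam."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad, -(A @ x - b)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]
```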
The algorithms utilize a peer-to-peer communication network, where each agent communicates with its one-hop neighbors to compute a common solution.
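A minimal sketch of this one-hop communication pattern, assuming a doubly stochastic mixing matrix W whose sparsity matches the network graph (the specific weights below are illustrative, not from the paper):

```python
import numpy as np

def consensus_step(X, W):
    """One round of peer-to-peer averaging: each row of X is an agent's local
    iterate, and W is a mixing matrix with W[i, j] > 0 only when j is agent i's
    one-hop neighbor (or j == i)."""
    return W @ X

# Example: three agents on the line graph 0 -- 1 -- 2 with a doubly stochastic W.
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
X = np.random.randn(3, 4)   # each row: one agent's local copy of the decision variable
X_next = consensus_step(X, W)
```

Repeated rounds of this mixing, interleaved with the local quasi-Newton steps, are what drive the agents toward a common solution.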
The authors prove convergence of their algorithms to a stationary point of the optimization problem under suitable assumptions.
Empirical evaluations show that DQN and EC-DQN converge competitively with existing distributed first-order and second-order methods, especially on ill-conditioned optimization problems, and that DQN reaches convergence in less computation time while incurring lower communication cost across a range of communication networks.
Source: Ola Shorinwa..., arxiv.org, 09-30-2024. https://arxiv.org/pdf/2402.06778.pdf