Core Concepts
The paper presents Push-LSVRG-UP, a distributed stochastic optimization algorithm for large-scale convex finite-sum problems over unbalanced directed networks, emphasizing accelerated linear convergence and reduced computational complexity.
Abstract
The paper introduces Push-LSVRG-UP, a novel distributed stochastic optimization algorithm for solving large-scale optimization problems over unbalanced directed networks. It targets accelerated linear convergence, lower storage costs, and improved computational efficiency relative to existing methods. The algorithm incorporates an uncoordinated probabilistic triggered mechanism that lets each agent decide independently, with its own trigger probability, when to compute its local batch gradient.
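To make the triggered mechanism concrete, below is a minimal sketch of a loopless-SVRG (LSVRG) style local update with a per-agent trigger probability. The function name lsvrg_step, the component_grads interface, and all variable names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def lsvrg_step(x, w, mu, component_grads, p, rng):
    """One LSVRG-style gradient estimate with a probabilistically
    triggered reference update, as each agent would run locally.

    x               : current local iterate (ndarray)
    w               : local reference (snapshot) point (ndarray)
    mu              : full local batch gradient evaluated at w (ndarray)
    component_grads : list of callables, one per local sample;
                      component_grads[j](y) returns the gradient of the
                      j-th local component function at y (assumed interface)
    p               : this agent's own trigger probability; agents need
                      not coordinate on this value
    rng             : numpy random Generator
    """
    n = len(component_grads)
    j = rng.integers(n)  # sample one local component uniformly

    # variance-reduced gradient estimate: stochastic gradient at x,
    # corrected by the snapshot gradient and the full batch gradient
    g = component_grads[j](x) - component_grads[j](w) + mu

    # uncoordinated trigger: with probability p, refresh the reference
    # point and recompute the local batch gradient
    if rng.random() < p:
        w = x.copy()
        mu = sum(grad(w) for grad in component_grads) / n
    return g, w, mu
```

Because the snapshot refresh happens only with probability p, an agent pays the full-batch-gradient cost on a small fraction of iterations, which is the source of the reduced computational complexity the summary highlights.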
Key points include:
Introduction of Push-LSVRG-UP for large-scale convex finite-sum optimization.
Utilization of the push-sum technique and the LSVRG method with uncoordinated triggered probabilities (see the push-sum sketch after this list).
Emphasis on accelerated linear convergence, reduced storage costs, and lower computational complexity.
Theoretical analysis providing step-size range, convergence rate, and iteration complexity.
Superiority of Push-LSVRG-UP demonstrated through simulations on real-world datasets.
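The push-sum mechanism referenced above can be sketched as follows. This is a generic synchronous mixing round under an assumed column-stochastic matrix C, not necessarily the paper's exact recursion; the function name push_sum_round is hypothetical.

```python
import numpy as np

def push_sum_round(X, y, C):
    """One synchronous push-sum mixing round over a directed network.

    X : (n_agents, dim) matrix; row i holds agent i's value vector
    y : (n_agents,) push-sum weights, initialized to all ones
    C : (n_agents, n_agents) column-stochastic mixing matrix respecting
        the directed edges (C[i, j] > 0 only if j can send to i);
        columns sum to 1, so no row-stochasticity, i.e., no
        balanced-network assumption, is required
    """
    X_new = C @ X                  # mix values along directed edges
    y_new = C @ y                  # mix weights the same way
    Z = X_new / y_new[:, None]     # de-biased estimates z_i = x_i / y_i
    return X_new, y_new, Z
```

With y initialized to the all-ones vector, the ratios z_i track the network-wide average even though C is only column-stochastic, which is what makes push-sum applicable to unbalanced directed graphs where doubly stochastic weights may not exist.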
Stats
Each agent performs only local computation without leaking private information.
An explicit feasible range for the constant step-size is provided.
Push-LSVRG-UP attains a linear convergence rate in reaching the globally optimal solution (illustrated after this list).
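As a hedged illustration of what the linear-rate claim means, the bound below uses generic placeholder constants, not the paper's own rate or step-size expressions.

```latex
% Linear (geometric) convergence to the optimum x^*: for some rate
% \rho \in (0, 1) and constant C > 0 depending on the step-size and
% the problem data (placeholders, not the paper's constants),
\mathbb{E}\!\left[ \lVert x_i^{k} - x^{\star} \rVert^{2} \right]
  \le C \, (1 - \rho)^{k},
% which yields an iteration complexity of
% O\!\big((1/\rho)\log(1/\epsilon)\big)
% to reach an \epsilon-accurate solution.
```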
Quotes
"Push-LSVRG-UP achieves superior characteristics of accelerated linear convergence."
"The introduction of an uncoordinated probabilistic triggered mechanism allows for agent independence."