The authors propose a cheap stochastic iterative method for optimization over the random generalized Stiefel manifold that avoids expensive eigenvalue decompositions and matrix inversions. The method has lower per-iteration cost, requires only matrix multiplications, and matches the convergence rates of its Riemannian counterparts.
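As a rough illustration of the "only matrix multiplications" point, the sketch below runs a plain penalty-based gradient iteration for a generalized eigenvalue objective under the constraint X^T B X = I. This is an assumption-laden stand-in, not the authors' update rule: the correction term B X (X^T B X - I) is simply the gradient of (1/4)||X^T B X - I||_F^2, and a fixed penalty weight only enforces the constraint approximately.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact update): minimize f(X) = -0.5*tr(X^T A X)
# with the constraint X^T B X = I handled through the penalty (lam/4)*||X^T B X - I||_F^2.
# Every operation below is a matrix multiplication; no eigendecomposition or inversion is used.
rng = np.random.default_rng(0)
n, p = 50, 3
M = rng.standard_normal((n, n))
A = (M + M.T) / np.sqrt(n)                 # symmetric objective matrix
C = rng.standard_normal((n, n))
B = C @ C.T / n + np.eye(n)                # symmetric positive definite constraint matrix

X = 0.1 * rng.standard_normal((n, p))
eta, lam = 5e-3, 10.0                      # hand-picked step size and penalty weight
for _ in range(20000):
    gap = X.T @ B @ X - np.eye(p)          # deviation from the constraint X^T B X = I
    grad_f = -A @ X                        # gradient of -0.5*tr(X^T A X)
    grad_pen = B @ X @ gap                 # gradient of (1/4)*||X^T B X - I||_F^2
    X -= eta * (grad_f + lam * grad_pen)

# A fixed penalty drives X only approximately onto the constraint set.
print("||X^T B X - I||_F =", np.linalg.norm(X.T @ B @ X - np.eye(p)))
```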
The proposed adaptive linearized alternating direction method of multipliers (ADMM) speeds up convergence by dynamically selecting the regularization (penalty) parameters based on the current iterate, without compromising the convergence guarantees.
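The summary does not spell out the adaptation rule; a common heuristic in the same spirit is residual balancing, which grows or shrinks the penalty parameter depending on whether the primal or dual residual dominates at the current iterate. The sketch below shows a linearized ADMM for the lasso with this hypothetical rule; it illustrates the general idea, not the paper's algorithm.

```python
import numpy as np

# Hypothetical illustration (not the paper's algorithm): linearized ADMM for the lasso
#   minimize 0.5*||A x - b||^2 + mu*||z||_1   subject to  x - z = 0,
# where the penalty parameter rho is adapted each iteration by residual balancing.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
m, n = 80, 40
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(m)
mu = 0.1

L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of grad of 0.5*||Ax-b||^2
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual variable
rho = 1.0
tau = 1.0 / (L + rho)                               # step size for the linearized x-update

for _ in range(500):
    # Linearized x-update: one gradient step on 0.5*||Ax-b||^2 + (rho/2)*||x - z + u||^2,
    # avoiding the linear solve required by the exact ADMM x-update.
    grad = A.T @ (A @ x - b) + rho * (x - z + u)
    x = x - tau * grad
    z_old = z
    z = soft_threshold(x + u, mu / rho)             # proximal step for mu*||z||_1
    u = u + x - z                                   # scaled dual ascent

    # Adaptive penalty (residual balancing): grow rho if the primal residual dominates,
    # shrink it if the dual residual dominates; rescale u because it is the scaled dual.
    r = np.linalg.norm(x - z)                       # primal residual
    s = np.linalg.norm(rho * (z - z_old))           # dual residual
    if r > 10 * s:
        rho *= 2.0; u /= 2.0
    elif s > 10 * r:
        rho /= 2.0; u *= 2.0
    tau = 1.0 / (L + rho)

print("primal residual:", np.linalg.norm(x - z))
print("objective:", 0.5 * np.linalg.norm(A @ z - b) ** 2 + mu * np.linalg.norm(z, 1))
```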
The perturbed gradient flow for the linear quadratic regulator (LQR) problem is shown to be small-disturbance input-to-state stable (ISS) under suitable conditions on the objective function.
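For reference, the standard shape of such an estimate is given below; the notation is illustrative and not taken from the paper. "Small-disturbance" means the ISS bound is only required to hold for disturbances of sufficiently small magnitude.

```latex
% Standard ISS-style estimate (illustrative notation): there exist \beta \in \mathcal{KL},
% \gamma \in \mathcal{K}, and a threshold \delta > 0 such that, whenever \|d\|_\infty \le \delta,
% the perturbed gradient flow satisfies
\[
  \|x(t) - x^\star\| \;\le\; \beta\bigl(\|x(0) - x^\star\|,\, t\bigr) \;+\; \gamma\bigl(\|d\|_\infty\bigr)
  \qquad \text{for all } t \ge 0,
\]
% where x^\star denotes the optimal solution and d the disturbance input.
```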
The paper proposes new algorithms that use control theory to solve online optimization problems with time-varying linear equality and inequality constraints. For the equality-constrained case, robust control is used to design an online algorithm that converges asymptotically to the optimal trajectory. Inequality constraints are handled with an anti-windup technique.
The authors prove new convergence rates for a generalized version of stochastic Nesterov acceleration under interpolation conditions. Their approach accelerates any stochastic gradient method that makes sufficient progress in expectation, and the proof applies to both convex and strongly convex functions.
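As a concrete, simplified instance of this setting, the sketch below wraps single-sample SGD steps in a Nesterov-style momentum extrapolation on a noiseless least-squares problem, where the interpolation condition holds because some x* fits every sample exactly. The momentum and step size are illustrative guesses, not the paper's schedule or its generalized acceleration scheme.

```python
import numpy as np

# Simplified illustration of stochastic acceleration under interpolation (not the paper's
# exact scheme): least squares with noiseless labels, so a single x* fits every sample and
# every per-sample gradient vanishes at x*. Plain single-sample SGD steps are taken from a
# Nesterov-style extrapolation point with fixed, hand-picked parameters.
rng = np.random.default_rng(2)
n_samples, dim = 200, 50
A = rng.standard_normal((n_samples, dim))
x_star = rng.standard_normal(dim)
b = A @ x_star                       # noiseless labels: the interpolation condition holds

x = np.zeros(dim)
x_prev = x.copy()
eta, beta = 1e-2, 0.9                # illustrative step size and momentum, not tuned

for it in range(20000):
    y = x + beta * (x - x_prev)      # Nesterov extrapolation point
    i = rng.integers(n_samples)      # single random sample
    g = (A[i] @ y - b[i]) * A[i]     # stochastic gradient of 0.5*(a_i^T y - b_i)^2 at y
    x_prev, x = x, y - eta * g       # gradient step from the extrapolated point

print("distance to x*:", np.linalg.norm(x - x_star))
```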
This paper establishes bounds on the violation probability of an optimal solution of the robust scenario problem, guaranteeing prescribed risk levels in chance-constrained optimization when the scenarios are generated from time-varying distributions, for both convex and non-convex feasible regions.
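For orientation, the quantity being bounded is the standard scenario-approach violation probability, written below in the usual i.i.d. notation (assumed here for reference, not taken from the paper); the contribution is extending guarantees of this type to scenarios drawn from time-varying distributions.

```latex
% Violation probability of a candidate decision x, where \mathcal{X}_\delta is the feasible
% set induced by the uncertain constraint under scenario \delta:
\[
  V(x) \;=\; \mathbb{P}\bigl\{\delta \in \Delta \;:\; x \notin \mathcal{X}_\delta \bigr\}.
\]
% A scenario solution x_N^\star built from N sampled constraints certifies the chance
% constraint at risk level \varepsilon with confidence 1 - \beta when
\[
  \mathbb{P}^N\bigl\{ V(x_N^\star) > \varepsilon \bigr\} \;\le\; \beta .
\]
```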
The recoverable robust shortest path problem with discrete recourse is Σ^p_3-hard for the arc exclusion and arc symmetric difference neighborhoods, and the inner adversarial problem for these neighborhoods is Π^p_2-hard.
The authors present a first-order method for solving linear programs that achieves polynomial-time convergence rates, with the convergence rate depending on the circuit imbalance measure of the constraint matrix rather than the Hoffman constant.
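To unpack the key quantity: the circuit imbalance measure of a matrix is the largest ratio between entries of any elementary (support-minimal) kernel vector. The definition below is the standard one and is included for reference; it is not quoted from the paper.

```latex
% Circuit imbalance measure of a matrix A (standard definition):
\[
  \kappa_A \;=\; \max\Bigl\{\, \bigl|g_i / g_j\bigr| \;:\; g \in \mathcal{E}(A),\ i, j \in \operatorname{supp}(g) \,\Bigr\},
\]
% where \mathcal{E}(A) denotes the set of elementary vectors of \ker(A), i.e., the nonzero
% kernel vectors with inclusion-minimal support.
```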
The connection between the query complexity of local search and the mixing time of the fastest-mixing Markov chain on the given graph is formally established by a dedicated theorem.
A novel approach to integrating machine learning and optimization for risk-aware decision-making under uncertainty.