The paper considers constrained stochastic nonlinear optimization problems, in which the objective is the expectation of a stochastic function and the constraints are deterministic equalities. To solve these problems, the authors apply the Stochastic Sequential Quadratic Programming (StoSQP) method, which can be viewed as applying Newton's method to the Karush-Kuhn-Tucker (KKT) conditions, with stochastic estimates standing in for exact derivatives.
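Concretely, and as a sketch using generic SQP notation rather than necessarily the paper's exact symbols: for $\min_x \mathbb{E}[f(x;\xi)]$ subject to $c(x) = 0$, with Lagrangian $\mathcal{L}(x,\lambda) = f(x) + \lambda^\top c(x)$, each iteration solves a Newton-type linear system built from stochastic derivative estimates and takes a damped step:

\[
\begin{pmatrix} B_t & G_t^\top \\ G_t & 0 \end{pmatrix}
\begin{pmatrix} \Delta x_t \\ \Delta \lambda_t \end{pmatrix}
= - \begin{pmatrix} \bar{\nabla}_x \mathcal{L}_t \\ c(x_t) \end{pmatrix},
\qquad
(x_{t+1}, \lambda_{t+1}) = (x_t, \lambda_t) + \bar{\alpha}_t \, (\Delta x_t, \Delta \lambda_t),
\]

where $B_t$ estimates the Lagrangian Hessian, $G_t = \nabla c(x_t)$ is the constraint Jacobian, $\bar{\nabla}_x \mathcal{L}_t$ is a stochastic gradient of the Lagrangian, and $\bar{\alpha}_t$ is an adaptively chosen random stepsize.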
To reduce the dominant per-iteration cost of StoSQP, namely solving this Newton system, the authors propose an Adaptive Inexact StoSQP (AI-StoSQP) scheme that employs an iterative sketching solver to solve the quadratic program inexactly in each iteration. Notably, the approximation error of the sketching solver need not vanish as iterations proceed, so the per-iteration computational cost remains bounded.
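To make the mechanism concrete, below is a minimal, self-contained sketch-and-project loop in Python for a generic consistent linear system. The function name, the Gaussian sketch choice, and all parameters are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sketch_and_project(A, b, num_iters=300, sketch_dim=5, rng=None):
    """Illustrative sketch-and-project solver for a consistent system A z = b.

    Each iteration draws a random sketch S, compresses the system to
    (S^T A) z = S^T b, and projects the current iterate onto the solution
    set of that small system. Truncating after a fixed number of passes
    yields a deliberately inexact solution -- the regime AI-StoSQP allows.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    z = np.zeros(n)
    for _ in range(num_iters):
        S = rng.standard_normal((n, sketch_dim))   # Gaussian sketch matrix
        SA = S.T @ A                               # sketched operator (s x n)
        r = S.T @ (b - A @ z)                      # sketched residual
        # Minimum-norm correction dz solving (S^T A) dz = r; this is
        # exactly the sketch-and-project update z += A^T S (S^T A A^T S)^+ r.
        dz, *_ = np.linalg.lstsq(SA, r, rcond=None)
        z += dz
    return z

# Tiny demo: a random symmetric system standing in for a KKT system.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M + M.T + 20 * np.eye(20)          # symmetric and well-conditioned
b = rng.standard_normal(20)
z = sketch_and_project(A, b, num_iters=300, sketch_dim=5, rng=1)
print(np.linalg.norm(A @ z - b))       # small but deliberately nonzero
```

The residual in the demo decays geometrically in expectation but is never driven to machine precision; stopping early is exactly the kind of fixed, non-vanishing inexactness the analysis accommodates.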
For the AI-StoSQP method, the authors establish the following key results:
Global almost sure convergence: They show that the KKT residual converges to zero almost surely from any initialization under mild assumptions.
Asymptotic normality: They prove that the rescaled primal-dual sequence $\frac{1}{\sqrt{\bar{\alpha}_t}}\,(x_t - x^\star, \lambda_t - \lambda^\star)$ converges in distribution to a mean-zero Gaussian with a nontrivial covariance matrix that depends on the underlying sketching distribution. This result quantifies the uncertainty inherent in the StoSQP iterates, which is crucial for performing online statistical inference.
Covariance estimation: The authors also analyze a plug-in covariance matrix estimator that can be computed in an online fashion to facilitate practical inference.
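Since a consistent plug-in estimate of the limiting covariance is available, confidence intervals for each primal or dual coordinate follow directly from the normality result. The snippet below is a generic illustration of that last step, not the authors' code; `Xi_hat` stands in for the online plug-in covariance estimate and `alpha_bar_t` for the current stepsize scaling:

```python
import numpy as np
from scipy.stats import norm

def confidence_intervals(z_t, Xi_hat, alpha_bar_t, level=0.95):
    """Per-coordinate confidence intervals for z_t = (x_t, lambda_t).

    Asymptotic normality says (z_t - z_star) / sqrt(alpha_bar_t) is
    approximately N(0, Xi), so each coordinate's half-width is
    q * sqrt(alpha_bar_t * Xi_hat[i, i]) with q the Gaussian quantile.
    """
    q = norm.ppf(0.5 + level / 2.0)                 # 1.96 for level=0.95
    half = q * np.sqrt(alpha_bar_t * np.diag(Xi_hat))
    return z_t - half, z_t + half
```

Under the stated asymptotics, each interval covers the corresponding true coordinate with probability approaching the nominal level as the iteration count grows.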
The authors illustrate the asymptotic normality result on benchmark nonlinear problems from the CUTEst test set and on linearly and nonlinearly constrained regression problems.
Source: Sen Na et al., https://arxiv.org/pdf/2205.13687.pdf