Core Concepts
Windowed Anderson acceleration improves the convergence rate of fixed-point iterations with linear symmetric operators.
Abstract
The paper studies the efficacy of the windowed Anderson acceleration (AA) algorithm for fixed-point methods, demonstrating improved convergence rates for linear symmetric operators. The study establishes that AA improves the root-linear convergence factor relative to the plain fixed-point iteration, and simulations validate the theoretical findings. Applications and comparisons with standard fixed-point methods are discussed, emphasizing the superiority of AA for Tyler's M-estimation.
Introduction
Investigates Anderson acceleration (AA) for fixed-point problems.
Relationship to Pulay mixing, nonlinear GMRES, and quasi-Newton methods.
Recent interest in AA applications.
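To make the method concrete, here is a minimal sketch of a windowed AA(m) iteration in the common Walker–Ni style; it is illustrative only and not the paper's exact implementation (the function name, window-handling details, and defaults are assumptions):

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, max_iter=300, tol=1e-10):
    """Windowed Anderson acceleration AA(m) for a fixed-point map g.

    Keeps only the last m residual differences (the sliding "window").
    Illustrative sketch; details differ across AA variants.
    """
    x = np.asarray(x0, dtype=float)
    G, F = [], []                     # histories of g-values and residuals
    for k in range(max_iter):
        gx = g(x)
        f = gx - x                    # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:            # window of m+1 entries = m differences
            G.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                    # plain fixed-point step to start
        else:
            # Least-squares mixing over the windowed residual differences
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            x = gx - dG @ gamma
    return x, max_iter
```

On a linear symmetric contraction g(x) = Ax + b this recovers the setting of the paper's main theorem: the accelerated iterates converge to the fixed point (I - A)^{-1} b faster than the plain iteration.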
Main Results
Linear Symmetric Operators: Theorem 1 establishes the convergence rate of AA(m) for linear symmetric operators.
Nonlinear Operators: Theorem 4 proves the local convergence rate of a modified AA(m) for nonlinear operators.
Simulations
Data Model 1: Verification of theoretical results for Tyler's M-estimator (TME).
Data Model 2: Comparison of computational efficiency between AA(m) and the standard TME fixed-point iteration.
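The baseline in these simulations is the classical TME fixed-point iteration. A minimal sketch (trace-normalized form; the function name, stopping rule, and defaults are assumptions, not the paper's code):

```python
import numpy as np

def tyler_fixed_point(X, max_iter=1000, tol=1e-10):
    """Standard fixed-point iteration for Tyler's M-estimator.

    X is an (n, p) data matrix, n > p. Each step applies
    S <- (p/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i),
    then rescales so that trace(S) = p. Illustrative sketch.
    """
    n, p = X.shape
    S = np.eye(p)
    for k in range(max_iter):
        S_inv = np.linalg.inv(S)
        # Quadratic forms x_i^T S^{-1} x_i for all rows at once
        q = np.einsum('ij,jk,ik->i', X, S_inv, X)
        S_new = (p / n) * (X / q[:, None]).T @ X
        S_new *= p / np.trace(S_new)          # fix the scale ambiguity
        if np.linalg.norm(S_new - S, 'fro') < tol:
            return S_new, k
        S = S_new
    return S, max_iter
```

This plain iteration converges linearly; the paper's point is that wrapping it in AA(m) reduces the per-iteration convergence factor.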
Comparison of AA Variants
Full-memory AA, restarting AA, and windowed AA are compared on TME.
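The three variants differ only in how they manage the residual history; a small sketch of the three policies (the function and mode names are illustrative, and the restart rule is one common convention, not necessarily the paper's):

```python
def update_history(F, f_new, mode, m):
    """Append the new residual f_new and prune F per the AA variant.

    Illustrative sketch of the three history-management policies.
    """
    F.append(f_new)
    if mode == "full":          # full-memory AA: keep every residual
        pass
    elif mode == "window":      # windowed AA(m): sliding window of m+1 residuals
        if len(F) > m + 1:
            F.pop(0)
    elif mode == "restart":     # restarting AA: wipe the history every m steps
        if len(F) > m + 1:
            F[:] = [f_new]
    return F
```

Full-memory AA solves an ever-growing least-squares problem, restarting AA periodically falls back to a cold start, and windowed AA(m) keeps a fixed-size problem each step, which is the variant the paper analyzes.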
Technical Proofs
Proof of Theorem 1: Establishes the upper bound on the r-linear convergence factor of AA(m).
Proof of Proposition 2: Shows that the bound of Theorem 1 is tight in a special case.
Stats
AA(m) improves the convergence rate of fixed-point iterations with linear symmetric operators.
The r-linear (root-linear) convergence factor of AA(m) is strictly smaller than that of the plain fixed-point iteration.
Theoretical results are validated through simulations.
Quotes
"AA significantly superior to standard fixed-point methods for Tyler’s M-estimation."
"AA outperforms the fixed-point iteration in various applications."