The paper presents new algorithms for the $\ell_{\infty}$-regression problem: given a matrix $C$ and a vector $\mathbf{d}$, find a vector $\mathbf{x}$ minimizing the maximum absolute entry of the residual, $\|C\mathbf{x} - \mathbf{d}\|_\infty$.
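For concreteness, $\ell_\infty$-regression can be written as a linear program and solved with an off-the-shelf LP solver. This baseline (not the paper's algorithm) introduces a slack variable $t$ and minimizes $t$ subject to $-t \le (C\mathbf{x} - \mathbf{d})_i \le t$:

```python
import numpy as np
from scipy.optimize import linprog

def linf_regression(C, d):
    """Baseline l_inf regression via LP: minimize t s.t. |Cx - d|_i <= t."""
    m, n = C.shape
    # Variables are [x_1, ..., x_n, t]; the objective is just t.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    # Encode  C x - d <= t  and  -(C x - d) <= t  as A_ub @ [x; t] <= b_ub.
    A_ub = np.block([[C, -np.ones((m, 1))],
                     [-C, -np.ones((m, 1))]])
    b_ub = np.concatenate([d, -d])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]

# Small example: minimize max(|x1 - 1|, |x2 - 1|, |x1 + x2|).
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 1.0, 0.0])
x, err = linf_regression(C, d)   # optimum: x = (1/3, 1/3), err = 2/3
```

Generic LP solvers do not exploit the problem's structure; the point of the paper's techniques below is to beat this black-box approach asymptotically.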
The key algorithmic techniques used are:
Acceleration via width reduction in multiplicative weights update (MWU) methods. Prior work achieved $\tilde{O}(n^{1/3})$ iteration complexity, but the per-iteration cost was high.
Inverse maintenance data structures to reduce the per-iteration cost by lazily updating the maintained inverse as the weights change. Prior work achieved $\tilde{O}(n^{2+1/6})$ runtime using only inverse maintenance.
Sketching techniques to further reduce the per-iteration cost, at the expense of a non-monotone MWU update that is more challenging to analyze.
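To make the first technique concrete, here is a toy width-reduced MWU loop for $\ell_\infty$-regression: each iteration calls a weighted $\ell_2$ oracle, then either takes a standard multiplicative step or, when a few rows have disproportionately large ("wide") residuals, boosts only those rows. The step size `eps` and width threshold `rho` are ad-hoc illustrative choices, not the parameters analyzed in the paper:

```python
import numpy as np

def mwu_linf(C, d, iters=200, eps=0.1, rho=10.0):
    """Toy width-reduced MWU sketch for l_inf regression (illustrative only)."""
    m, n = C.shape
    w = np.ones(m)                # one multiplicative weight per row
    x_sum = np.zeros(n)
    for _ in range(iters):
        # l_2 oracle: argmin_x  sum_i w_i * (C x - d)_i^2.
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(sw[:, None] * C, sw * d, rcond=None)
        r = np.abs(C @ x - d)
        wide = r > rho * (np.median(r) + 1e-12)
        if wide.any():
            # Width reduction: sharply boost the few widest rows so
            # later l_2 solves are forced to shrink their residuals.
            w[wide] *= 2.0
        else:
            # Standard MWU step: weights grow with relative residual.
            w *= 1.0 + eps * r / (r.max() + 1e-12)
        x_sum += x
    return x_sum / iters          # averaged iterate

C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 1.0, 0.0])
x = mwu_linf(C, d)
```

The point of the width-reduction branch is exactly the acceleration lever: it caps how large any single residual can get relative to the rest, which is what shrinks the iteration count to $\tilde{O}(n^{1/3})$ in the analyzed algorithms.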
The paper makes the following contributions:
A deterministic algorithm that combines acceleration and lazy inverse updates, achieving a runtime of $\tilde{O}(n^{2+1/12})$. This improves on the previous best deterministic runtime of $\tilde{O}(n^{2+1/6})$.
A randomized algorithm that combines acceleration, lazy inverse updates, and sketching, achieving a runtime of $\tilde{O}(n^{2+1/22.5})$. This is the first algorithm to efficiently combine all three key techniques.
The technical novelties include new notions of stability (such as $\ell_3$-stability) and a more sophisticated width-reduction scheme to handle the non-monotone MWU updates required for sketching. The paper also provides a tighter analysis of the sketching error than prior work.
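For intuition on the sketching ingredient, the generic sketch-and-solve pattern replaces a tall least-squares subproblem with a much smaller sketched one while approximately preserving residual norms. This is the standard technique only, not the paper's tighter non-monotone analysis; the dimensions and Gaussian sketch below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s = 2000, 20, 200                 # tall problem; sketch size s << m
C = rng.standard_normal((m, n))
d = C @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

# Gaussian sketch S: an s x m matrix with ||S v|| ~ ||v|| for vectors
# in the relevant subspace, so solving the sketched problem is cheap
# but still nearly optimal for the original one.
S = rng.standard_normal((s, m)) / np.sqrt(s)
x_sk, *_ = np.linalg.lstsq(S @ C, S @ d, rcond=None)
x_full, *_ = np.linalg.lstsq(C, d, rcond=None)

# Residual of the sketched solution relative to the exact one.
rel = np.linalg.norm(C @ x_sk - d) / np.linalg.norm(C @ x_full - d)
```

Inside an MWU solver the subproblem changes every iteration, and the weights need not move monotonically once sketching noise enters the updates; that interaction is precisely what the paper's new stability notions and width-reduction scheme are designed to control.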
Source: Deeksha Adil et al., arXiv, 10-01-2024. https://arxiv.org/pdf/2409.20030.pdf