Core Concepts
Random feature ridge regression exhibits a trade-off between approximation and generalization power in the high-dimensional polynomial scaling regime.
Abstract
Recent advances in machine learning challenge traditional notions of model complexity and generalization. This study investigates random feature ridge regression (RFRR) in the high-dimensional polynomial scaling regime, where the sample size n and the number of random features p grow polynomially in the input dimension d (with scaling exponents κ1 and κ2), and examines how parametrization shapes performance. RFRR exhibits a trade-off between approximation and generalization, with distinct behaviors depending on how the number of parameters p compares to the sample size n. The analysis gives a comprehensive characterization of RFRR's test error across the different parametrization regimes.
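For concreteness, here is a minimal sketch of RFRR: random first-layer weights are drawn once and kept fixed, and only the ridge coefficients on top of the resulting features are trained. The ReLU activation, the dimensions, the noiseless degree-2 target, and the n·λ ridge scaling are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, W):
    """ReLU random features sigma(X W^T / sqrt(d)); other activations also work."""
    return np.maximum(X @ W.T / np.sqrt(X.shape[1]), 0.0)

def rfrr_fit(X, y, p, lam, rng):
    """Fit RFRR: draw p random features, then solve the ridge problem on top."""
    n, d = X.shape
    W = rng.standard_normal((p, d))              # random weights, never trained
    Z = random_features(X, W)                    # n x p feature matrix
    # Closed-form ridge solution a = (Z^T Z + n*lam*I)^{-1} Z^T y
    a = np.linalg.solve(Z.T @ Z + n * lam * np.eye(p), Z.T @ y)
    return W, a

def rfrr_predict(X, W, a):
    return random_features(X, W) @ a

# Illustrative setup: inputs on the unit sphere, noiseless degree-2 target.
d, n, p, lam = 50, 2000, 400, 1e-3
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = X[:, 0] + 0.5 * X[:, 0] * X[:, 1]

W, a = rfrr_fit(X, y, p, lam, rng)
Xte = rng.standard_normal((500, d)); Xte /= np.linalg.norm(Xte, axis=1, keepdims=True)
yte = Xte[:, 0] + 0.5 * Xte[:, 0] * Xte[:, 1]
print("test MSE:", np.mean((rfrr_predict(Xte, W, a) - yte) ** 2))
```

Only the p-dimensional coefficient vector a is learned, so p plays the role of the parameter count that the stats below compare against the sample size n.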
Stats
p ≳ √n random features already suffice for RFRR to match the performance of kernel ridge regression (KRR); see the numerical sketch after these stats.
Reaching the optimal test error requires overparametrization: the test error keeps improving until p/n → ∞.
Taking λ → 0+ (the ridgeless limit) achieves the optimal test error in the critical regimes where the two scaling exponents coincide, κ1 = κ2.
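These stats can be probed on toy data. The sketch below compares RFRR with its p → ∞ kernel limit (for ReLU features this is the degree-1 arc-cosine kernel) as p grows from about √n past n, then shrinks λ toward 0+ in the overparametrized regime. The dimensions, target, and kernel choice are illustrative assumptions; treat this as a loose numerical illustration, not a reproduction of the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(m, d, rng):
    """m points drawn uniformly on the unit sphere in R^d."""
    X = rng.standard_normal((m, d))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def arccos1_kernel(X, Y):
    """p -> infinity limit of ReLU random features (degree-1 arc-cosine kernel)."""
    G = np.clip(X @ Y.T, -1.0, 1.0)                # cosines of angles (unit vectors)
    t = np.arccos(G)
    return (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

def relu_features(X, W):
    """ReLU features scaled so that Z @ Z.T concentrates on arccos1_kernel."""
    return np.maximum(X @ W.T, 0.0) / np.sqrt(W.shape[0])

d, n, lam = 40, 1000, 1e-3
X, Xte = sphere(n, d, rng), sphere(500, d, rng)
target = lambda Z: Z[:, 0] + 0.5 * Z[:, 0] * Z[:, 1]   # noiseless degree-2 target
y, yte = target(X), target(Xte)

# KRR baseline: what RFRR converges to as p -> infinity.
K = arccos1_kernel(X, X)
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
print(f"KRR        : test MSE {np.mean((arccos1_kernel(Xte, X) @ alpha - yte) ** 2):.4f}")

# First two stats: RFRR approaches the KRR baseline as p grows past sqrt(n),
# and keeps improving with further overparametrization (large p/n).
for p in [int(np.sqrt(n)), 4 * int(np.sqrt(n)), n, 4 * n]:
    W = rng.standard_normal((p, d))
    Z, Zte = relu_features(X, W), relu_features(Xte, W)
    a = np.linalg.solve(Z.T @ Z + n * lam * np.eye(p), Z.T @ y)
    print(f"RFRR p={p:4d}: test MSE {np.mean((Zte @ a - yte) ** 2):.4f}")

# Third stat (loose illustration): with ample overparametrization and a
# noiseless target, shrinking the ridge toward the lam -> 0+ limit helps.
p = 4 * n
W = rng.standard_normal((p, d))
Z, Zte = relu_features(X, W), relu_features(Xte, W)
for lam0 in [1e-1, 1e-3, 1e-6]:
    a = np.linalg.solve(Z.T @ Z + n * lam0 * np.eye(p), Z.T @ y)
    print(f"RFRR lam={lam0:.0e}: test MSE {np.mean((Zte @ a - yte) ** 2):.4f}")
```

Scaling the ridge term as n·λ in both the kernel (dual) and feature (primal) solves keeps the effective regularization identical, which is what makes the RFRR and KRR numbers directly comparable.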