The content discusses the problem of learning approximations to smooth, high-dimensional functions from finite data. Key points:
Motivation: This problem arises in parametric models and computational uncertainty quantification, where the target function represents a quantity of interest that depends on many parameters.
Function class: The target functions are assumed to be (b,ε)-holomorphic, meaning they admit holomorphic extensions to certain complex regions in the parameter space. This class captures the smoothness of many parametric PDE solutions.
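As a hedged sketch of this definition (notation may differ slightly from the paper's), one common formulation is: a function $f$ on $U = [-1,1]^N$ is $(\boldsymbol{b},\varepsilon)$-holomorphic, for a nonnegative sequence $\boldsymbol{b} = (b_j)$ and $\varepsilon > 0$, if it admits a bounded holomorphic extension to every Bernstein polyellipse compatible with $(\boldsymbol{b},\varepsilon)$:

```latex
f \in \mathcal{H}(\boldsymbol{b},\varepsilon)
\iff
f \text{ extends holomorphically and boundedly to } \mathcal{E}(\boldsymbol{\rho})
\text{ for all } \boldsymbol{\rho} \ge \boldsymbol{1} \text{ with }
\sum_{j} b_j \left( \frac{\rho_j + \rho_j^{-1}}{2} - 1 \right) \le \varepsilon,
```

where $\mathcal{E}(\boldsymbol{\rho}) = \bigotimes_j \mathcal{E}(\rho_j)$ is the tensor product of Bernstein ellipses with foci $\pm 1$ and semi-axis sum $\rho_j$. Intuitively, small $b_j$ means the function is very smooth (analytic in a large region) in the $j$-th parameter, which is what allows dimension-independent rates.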
Benchmark: The best s-term polynomial approximation provides a theoretical benchmark for the approximation error, showing algebraic convergence rates that are free from the curse of dimensionality.
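For concreteness, the algebraic rates in question typically take the following form (a sketch, assuming $\boldsymbol{b} \in \ell^p$ for some $0 < p < 1$, as in the related literature on best $s$-term approximation):

```latex
\inf_{|S| \le s} \Big\| f - \sum_{\nu \in S} c_\nu \Phi_\nu \Big\|_{L^\infty} \le C \, s^{1 - 1/p},
\qquad
\inf_{|S| \le s} \Big\| f - \sum_{\nu \in S} c_\nu \Phi_\nu \Big\|_{L^2} \le C \, s^{1/2 - 1/p},
```

where the $\Phi_\nu$ are orthonormal (e.g. Chebyshev or Legendre) polynomials, $c_\nu$ are the expansion coefficients, and $C$ depends on $\boldsymbol{b}$, $\varepsilon$, and $p$ but not on the dimension, which is the precise sense in which the rates are free from the curse of dimensionality.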
Limits of learnability: It is shown that no learning method can achieve the best s-term approximation rates from finite data, highlighting a fundamental gap between approximation theory and practical learning.
Sparse polynomial learning: A weighted sparse polynomial approximation method is described that achieves near-optimal learning rates, bridging this gap.
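The idea of such a method can be sketched in code. The following is a minimal toy illustration, not the paper's algorithm: it fits a smooth function of several variables from random samples by solving a weighted LASSO problem over a tensor-product Chebyshev basis via ISTA (iterative soft-thresholding). The function, weights, and regularization parameter are illustrative choices, not taken from the source.

```python
import numpy as np
from itertools import product

def chebyshev_features(X, orders):
    """Tensor-product Chebyshev design matrix: A[i, j] = prod_k T_{nu_k}(x_ik)."""
    theta = np.arccos(np.clip(X, -1.0, 1.0))   # T_n(x) = cos(n * arccos x)
    A = np.ones((X.shape[0], len(orders)))
    for j, nu in enumerate(orders):
        for k, n in enumerate(nu):
            if n > 0:
                A[:, j] *= np.cos(n * theta[:, k])
    return A

def weighted_ista(A, y, w, lam, n_iter=3000):
    """Weighted LASSO  min_c 0.5*||A c - y||^2 + lam * sum_j w_j |c_j|,
    solved by iterative soft-thresholding (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = c - A.T @ (A @ c - y) / L          # gradient step on the quadratic
        c = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted prox
    return c

# Toy problem: a smooth function of d = 4 variables (a stand-in for a
# parametric quantity of interest), learned from m = 150 random samples.
rng = np.random.default_rng(0)
d, deg, m = 4, 4, 150
f = lambda X: np.exp(X.mean(axis=1))
orders = [nu for nu in product(range(deg + 1), repeat=d) if sum(nu) <= deg]
# One common weight choice: larger weights for higher-order multi-indices.
w = np.array([2.0 ** (sum(n > 0 for n in nu) / 2.0) for nu in orders])

X = rng.uniform(-1.0, 1.0, (m, d))
c = weighted_ista(chebyshev_features(X, orders), f(X), w, lam=1e-4)

X_test = rng.uniform(-1.0, 1.0, (500, d))
pred = chebyshev_features(X_test, orders) @ c
rel_err = np.max(np.abs(pred - f(X_test))) / np.max(np.abs(f(X_test)))
```

The weights penalize high-order multi-indices more heavily, steering the recovered coefficient vector toward the kind of anchored, lower-set sparsity patterns that smooth high-dimensional functions exhibit; this is the mechanism by which weighted sparsity narrows the gap with the best $s$-term benchmark.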
Deep neural networks: Existence theorems for DNN approximations are reviewed, and a "practical existence theory" is developed to show that certain DNN architectures and training strategies can also achieve near-optimal learning rates.
Source: key ideas extracted from the paper by Ben Adcock, S... at arxiv.org, 04-08-2024: https://arxiv.org/pdf/2404.03761.pdf