The paper studies the properties and applications of differentiable neural networks activated by the Rectified Power Unit (RePU) function σ_p(x) = max(0, x)^p. Key highlights:
Partial derivatives of RePU networks can be represented by mixed-RePU-activated networks, which yields upper bounds on the complexity of the function class consisting of derivatives of RePU networks.
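For concreteness, here is a minimal sketch of the RePU activation and its derivative (the definitions are standard; the helper names are mine):

```python
import numpy as np

def repu(x, p=2):
    """Rectified Power Unit: sigma_p(x) = max(0, x)**p.
    p = 1 recovers ReLU; p >= 2 yields p - 1 continuous derivatives,
    so the network itself is differentiable."""
    return np.maximum(0.0, x) ** p

def repu_prime(x, p=2):
    """For p >= 2, d/dx sigma_p(x) = p * sigma_{p-1}(x): the derivative
    of a RePU unit is a scaled lower-order RePU unit. By the chain rule,
    partial derivatives of a RePU network are therefore built from units
    of orders p and p - 1, i.e., a mixed-RePU network."""
    return p * np.maximum(0.0, x) ** (p - 1)
```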
Novel approximation results are established for simultaneously approximating C^s-smooth functions and their derivatives with RePU networks. The rates improve when the data or the target function has a low-dimensional structure, mitigating the curse of dimensionality.
The approximation power of RePU networks for multivariate polynomials is analyzed, with explicit network architectures that represent polynomials exactly; these constructions underpin the simultaneous approximation of smooth functions and their derivatives.
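The exactness for polynomials rests on classical identities. A quick check of the two basic gadgets (squaring and multiplication) for p = 2, reusing `repu` from the sketch above:

```python
def square_via_repu2(x):
    # Exact on all of R: x^2 = sigma_2(x) + sigma_2(-x).
    return repu(x, 2) + repu(-x, 2)

def product_via_repu2(x, y):
    # Polarization identity: xy = ((x + y)^2 - (x - y)^2) / 4,
    # with each square realized by a pair of RePU-2 units.
    return (square_via_repu2(x + y) - square_via_repu2(x - y)) / 4.0

x, y = np.random.randn(1000), np.random.randn(1000)
assert np.allclose(square_via_repu2(x), x ** 2)
assert np.allclose(product_via_repu2(x, y), x * y)
```

Composing such product gadgets yields networks that compute multivariate monomials, and hence polynomials, exactly; the paper's explicit architectures build on constructions of this kind.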
The approximation results are applied to study the statistical learning theory of the deep score matching estimator (DSME) built with RePU networks; DSME can mitigate the curse of dimensionality when the data has low-dimensional support.
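To make the DSME objective concrete, here is a minimal sketch of the Hyvärinen score-matching loss, J(θ) = E[tr(∇_x s_θ(x)) + ½‖s_θ(x)‖²], which such an estimator minimizes over a class of RePU networks; `score_net` is a hypothetical torch module, and this is the generic objective rather than the paper's exact empirical construction:

```python
import torch

def score_matching_loss(score_net, x):
    """Empirical Hyvarinen score-matching objective on a batch x of
    shape (n, d); score_net: R^d -> R^d is a (hypothetical) RePU-activated
    torch.nn.Module estimating the score grad_x log p(x)."""
    x = x.detach().requires_grad_(True)
    s = score_net(x)                                 # (n, d) estimated score
    norm_term = 0.5 * (s ** 2).sum(dim=1)            # 0.5 * ||s(x)||^2
    trace = torch.zeros(x.shape[0], device=x.device)
    for i in range(x.shape[1]):                      # exact Jacobian trace
        grad_i = torch.autograd.grad(s[:, i].sum(), x, create_graph=True)[0]
        trace = trace + grad_i[:, i]                 # d s_i / d x_i
    return (trace + norm_term).mean()
```

Note that the trace term requires differentiating the network, which is exactly where the derivative representation and complexity bounds above enter the analysis.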
A penalized deep isotonic regression (PDIR) approach using RePU networks is proposed, which encourages the partial derivatives of the estimated regression function to be nonnegative. PDIR achieves minimax optimal convergence rates and is robust to model misspecification.
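Continuing in torch, a minimal sketch of how such a monotonicity penalty can look, assuming a squared-error fit plus a ReLU-type penalty on the negative part of each monotone coordinate's derivative (`lam` and the exact penalty form are illustrative choices, not necessarily the paper's):

```python
import torch

def pdir_loss(f_net, x, y, lam=1.0, monotone_dims=None):
    """Penalized isotonic regression sketch: least squares plus a penalty
    pushing the selected partial derivatives toward nonnegativity.
    f_net: hypothetical RePU network R^d -> R (a torch.nn.Module)."""
    x = x.detach().requires_grad_(True)
    pred = f_net(x).squeeze(-1)
    mse = ((pred - y) ** 2).mean()
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]  # (n, d)
    dims = monotone_dims if monotone_dims is not None else range(x.shape[1])
    # Penalize only the negative part: relu(-df/dx_j) vanishes wherever
    # the fitted function is nondecreasing in coordinate j.
    penalty = sum(torch.relu(-grads[:, j]).mean() for j in dims)
    return mse + lam * penalty
```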
Key insights extracted from the paper by Guohao Shen, ... on arxiv.org, 04-23-2024: https://arxiv.org/pdf/2305.00608.pdf