Core Concepts
The core contribution of this article is a sharp asymptotic estimate for the expected squared error of online learning algorithms that approximate the regression function from noisy vector-valued data, using a reproducing kernel Hilbert space (RKHS) as a prior.
Summary
The article considers the problem of learning the regression function from noisy vector-valued data, using an appropriate RKHS as a prior. The focus is on estimating the expected squared RKHS-norm error of approximations to the regression function that are built incrementally by online algorithms.
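For orientation, the underlying least-squares problem can be written in the notation standard in learning theory; the symbols X, Y, rho, and f_rho below are our choice, not necessarily the article's:

```latex
% Least-squares regression (standard notation, assumed here): samples
% (x, y) in X x Y, with Y a Hilbert space, are drawn i.i.d. from an
% unknown probability measure rho.
\mathcal{E}(f) = \int_{X \times Y} \| f(x) - y \|_Y^2 \, d\rho(x, y),
\qquad
f_\rho(x) = \int_Y y \, d\rho(y \mid x).
% f_rho minimizes the risk over all measurable f; the RKHS serves as
% the hypothesis space (prior) in which the approximations are built.
```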
Key highlights:
- The authors introduce RKHSs of vector-valued functions and the associated smoothness spaces, and discuss properties of the minimization problem defining the regression function.
- They analyze an online learning algorithm that builds successive approximations to the regression function by processing i.i.d. samples one at a time; a rough sketch of such a scheme is given after this list.
- Under standard assumptions on the feature map, the algorithm parameters, and the smoothness of the regression function, the authors derive a sharp asymptotic estimate for the expected squared error in the RKHS norm.
- The estimate shows that the expected squared error is bounded by a constant times (m+1)^(-s/(2+s)), where m is the number of samples processed so far and the parameter s expresses an additional smoothness assumption on the regression function; the bound is written out below the list.
- The proof extends earlier work on Schwarz iterative methods in the noiseless case to the more general vector-valued setting with noisy measurements.
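Written out in formulas, the main estimate from the highlights takes the following shape. The symbols f_m, f_rho, H, and C are our notation, and the source condition shown is only one typical way such smoothness assumptions are expressed:

```latex
% Main estimate (our notation): f_m is the approximation after m
% processed samples, f_rho the regression function, H the RKHS.
\mathbb{E}\, \| f_m - f_\rho \|_H^2 \le C\,(m+1)^{-s/(2+s)}, \qquad s > 0,
% where s encodes extra smoothness of f_rho, e.g. via a source
% condition f_rho = T^{s/2} g with T the kernel integral operator
% (a typical form in this literature; the article's exact assumption
% may differ).
```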
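The article's algorithm itself is not reproduced in this summary. As a rough illustration of what such an incremental scheme can look like, the following minimal Python sketch runs a classical stochastic-gradient online update in an RKHS (in the spirit of Smale and Yao's online learning scheme) with a separable vector-valued Gaussian kernel. The kernel choice, step-size schedule, and all parameter names (gamma0, lam, theta) are assumptions for illustration, not details taken from the article:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Scalar Gaussian kernel; with outputs in R^d it induces the
    separable operator-valued kernel K(x, z) * I_d."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_rkhs_regression(samples, gamma0=0.5, lam=0.01, theta=0.5):
    """One-pass online least-squares regression in the RKHS of the kernel.

    Each incoming sample (x_m, y_m) triggers the stochastic-gradient update
        f_{m+1} = (1 - gamma_m * lam) * f_m - gamma_m * (f_m(x_m) - y_m) * K_{x_m},
    stored as a kernel expansion with vector-valued coefficients c_i.
    """
    xs, cs = [], []  # support points and their coefficients in R^d
    for m, (x, y) in enumerate(samples):
        gamma = gamma0 / (m + 1) ** theta  # decaying step size
        # Evaluate the current approximation f_m at the new input x.
        fx = np.zeros_like(y, dtype=float)
        for xi, c in zip(xs, cs):
            fx += c * gaussian_kernel(x, xi)
        # Shrink existing coefficients (regularization term) ...
        cs = [(1.0 - gamma * lam) * c for c in cs]
        # ... and append the new kernel section with its coefficient.
        xs.append(x)
        cs.append(-gamma * (fx - y))
    def f(x):  # the final approximation as a kernel expansion
        return sum(c * gaussian_kernel(x, xi) for xi, c in zip(xs, cs))
    return f

# Usage: recover a smooth R^2-valued function from 500 noisy samples.
rng = np.random.default_rng(0)
data = [(x, np.array([np.sin(x.sum()), np.cos(x[0])]) + 0.1 * rng.normal(size=2))
        for x in rng.uniform(-1, 1, size=(500, 3))]
f_hat = online_rkhs_regression(data)
print(f_hat(np.zeros(3)))  # compare with the noise-free value [0.0, 1.0]
```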