Core Concept
This paper establishes the mean square consistency of an online regularized learning algorithm in a reproducing kernel Hilbert space (RKHS) with dependent and non-stationary data streams. The authors introduce the concept of a random Tikhonov regularization path and show that if the regularization path is slowly time-varying, then the output of the algorithm tracks the regularization path in mean square. Furthermore, if the data streams satisfy the RKHS persistence of excitation condition, then the output of the algorithm is consistent with the unknown function in mean square.
Summary
The paper studies the convergence of recursive regularized learning algorithms in a reproducing kernel Hilbert space (RKHS) with dependent and non-stationary online data streams.
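The paper's exact algorithm is not reproduced here, but a typical recursive regularized (stochastic-gradient) update in an RKHS has the form f_{t+1} = f_t - a_t[(f_t(x_t) - y_t)K(x_t, ·) + λ_t f_t]. The sketch below implements that generic update with a Gaussian kernel on a synthetic dependent (AR(1)) input stream; the kernel choice, step-size schedule `0.5 / t**0.6`, and regularization schedule `1.0 / t**0.3` are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(x, xp, gamma=1.0):
    # RBF kernel K(x, x') = exp(-gamma * (x - x')^2), scalar inputs
    return np.exp(-gamma * (x - xp) ** 2)

def regularized_update(centers, coefs, x_t, y_t, step, lam, gamma=1.0):
    # One step of f_{t+1} = f_t - step * ((f_t(x_t) - y_t) K(x_t, .) + lam * f_t),
    # with f_t stored as sum_i coefs[i] * K(centers[i], .)
    f_xt = sum(c * gaussian_kernel(xc, x_t, gamma) for xc, c in zip(centers, coefs))
    coefs = [(1.0 - step * lam) * c for c in coefs]  # shrink: the -step*lam*f_t term
    centers = centers + [x_t]                        # new kernel section at x_t
    coefs = coefs + [-step * (f_xt - y_t)]           # gradient of the squared loss
    return centers, coefs

rng = np.random.default_rng(0)
centers, coefs = [], []
x = 0.0
for t in range(1, 500):
    x = 0.9 * x + rng.normal()            # dependent, non-i.i.d. AR(1) input stream
    y = np.sin(x) + 0.1 * rng.normal()    # noisy samples of the unknown f*(x) = sin(x)
    centers, coefs = regularized_update(centers, coefs, x, y,
                                        step=0.5 / t ** 0.6, lam=1.0 / t ** 0.3)

# evaluate the learned function f_t at a point
f_hat = lambda z: sum(c * gaussian_kernel(xc, z) for xc, c in zip(centers, coefs))
```

Letting the regularization parameter λ_t decay more slowly than the step size mirrors, at the level of this toy, the paper's requirement that the regularization path be slowly time-varying.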
Key highlights:
The authors introduce the concept of a random Tikhonov regularization path, which involves randomly time-varying operators induced by the input data. This reframes the statistical learning problem as an ill-posed inverse problem with randomly time-varying forward operators.
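For orientation, in the classical setting with a fixed marginal distribution $\mu$ the Tikhonov regularization path has a closed form; the display below is a standard sketch in generic notation ($T$ the data-induced covariance operator, $f^{*}$ the unknown function, $\lambda > 0$ the regularization parameter), not the paper's exact construction:

$$f_{\lambda} = (T + \lambda I)^{-1} T f^{*}, \qquad T = \int_{\mathcal{X}} K_x \otimes K_x \, d\mu(x),$$

where $K_x = K(x, \cdot)$. In the random path, $T$ and $\lambda$ are replaced by randomly time-varying operators $T_t$ induced by the input data and time-varying parameters $\lambda_t$, which is what turns the problem into an inverse problem with random forward operators.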
The authors investigate the mean square asymptotic stability of two types of random difference equations in RKHS, where the non-homogeneous terms are respectively a martingale difference sequence and the drifts of the regularization paths.
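Concretely, subtracting the regularization path from the algorithm's output typically produces a random difference equation of the following shape (a sketch in generic notation, with $a_t$ the step size, $T_t$ the random data-induced operator, and $M_{t+1}$ a martingale difference term; the paper's exact equations may differ):

$$e_{t+1} = \bigl(I - a_t (T_t + \lambda_t I)\bigr) e_t + a_t M_{t+1} + \bigl(f_{\lambda_t, t} - f_{\lambda_{t+1}, t+1}\bigr),$$

where $e_t = f_t - f_{\lambda_t, t}$ is the tracking error: the martingale difference term and the drift of the path are exactly the two non-homogeneous terms whose stability the authors analyze.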
The authors show that if the random Tikhonov regularization path is slowly time-varying, then the tracking error between the output of the algorithm and the regularization path tends to zero in mean square.
The authors introduce the RKHS persistence of excitation (PE) condition, which ensures that the random regularization path can approximate the unknown function. They prove that if the random regularization path is slowly time-varying and the data stream satisfies the RKHS PE condition, then the output of the algorithm is consistent with the unknown function in mean square.
For independent and non-identically distributed online data streams, the authors show that the algorithm achieves mean square consistency if the data-induced marginal probability measures are slowly time-varying and their average has a uniformly strictly positive lower bound, without assuming convergence of the marginal probability measures or any a priori information about the unknown function.