In recent years, functional linear models have attracted growing attention in statistics and machine learning, with the aim of recovering the slope function or its functional predictor. This paper considers an online regularized learning algorithm for functional linear models in reproducing kernel Hilbert spaces. Convergence analyses of the excess prediction error and the estimation error are provided with polynomially decaying step-sizes and constant step-sizes, respectively. Fast convergence rates can be derived via a capacity-dependent analysis. By introducing an explicit regularization term, we raise the saturation boundary of unregularized online learning algorithms when the step-size decays polynomially, and establish fast convergence rates for the estimation error without any capacity assumption. However, it remains an open problem to obtain capacity-independent convergence rates for the estimation error of the unregularized online learning algorithm with decaying step-size. We also show that the convergence rates of both the prediction error and the estimation error with constant step-size are competitive with those in the literature.
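For concreteness, a generic form of such an online regularized iteration is sketched below; this is an illustration under assumed notation, not necessarily the exact update analyzed in the paper. Given streaming samples $(X_t, Y_t)$, a reproducing kernel $K$ with integral operator $(L_K X)(u) = \int K(u,s)\, X(s)\, ds$, step-size $\eta_t$, and regularization parameter $\lambda \geq 0$, the slope estimate $\beta_t$ in the RKHS is updated as
$$
\beta_{t+1} = \beta_t - \eta_t \Big[ \big( \langle \beta_t, X_t \rangle_{L^2} - Y_t \big)\, L_K X_t + \lambda\, \beta_t \Big],
$$
which is stochastic gradient descent on the regularized square loss $\tfrac{1}{2} \big( \langle \beta, X \rangle_{L^2} - Y \big)^2 + \tfrac{\lambda}{2} \| \beta \|_K^2$. Setting $\lambda = 0$ recovers the unregularized iteration whose saturation effect is mentioned above.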