This article provides a convergence analysis of online stochastic gradient descent algorithms for functional linear models. By adopting characterizations of the regularity of the slope function, the capacity of the kernel space, and the capacity of the covariance operator of the sampling process, we achieve a significant improvement in the convergence rates. Both prediction and estimation problems are studied, and we show that the capacity assumption can alleviate the saturation of the convergence rate as the regularity of the target function increases. We further show that, with a properly selected kernel, capacity assumptions can fully compensate for the regularity assumptions in prediction problems (but not in estimation problems). This demonstrates a significant difference between prediction and estimation problems in functional data analysis.
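To fix ideas, the following is a minimal sketch of the kind of algorithm the abstract refers to: online SGD for the functional linear model Y_t = ⟨β*, X_t⟩_{L^2} + ε_t, with the slope function estimated in a reproducing kernel Hilbert space. The Gaussian kernel, the grid discretization of [0, 1], the step-size schedule η_t = η₁ t^{-θ}, and all parameter values are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid discretization of [0, 1]; all integrals below are Riemann sums.
m = 100
s = np.linspace(0.0, 1.0, m)
w = 1.0 / m  # quadrature weight

# Gaussian kernel matrix K(s_i, s_j) on the grid (illustrative kernel choice).
sigma = 0.2
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2.0 * sigma**2))

# Hypothetical ground truth: slope function beta*(s) and noisy responses
# Y_t = <beta*, X_t>_{L^2} + eps_t, with X_t a random smooth curve.
beta_star = np.sin(2 * np.pi * s)

def sample_curve():
    # Random curve built from a few Fourier modes (illustrative sampling process).
    coefs = rng.normal(size=5)
    return sum(c * np.cos(np.pi * k * s) for k, c in enumerate(coefs, start=1))

# Online SGD in the RKHS: beta_{t+1} = beta_t - eta_t (<beta_t, X_t> - Y_t) L_K X_t,
# where (L_K X)(u) = \int K(u, v) X(v) dv is approximated by K @ X * w.
T = 5000
eta1, theta = 0.5, 0.5  # polynomially decaying step size eta_t = eta1 * t^{-theta}
beta = np.zeros(m)
for t in range(1, T + 1):
    X = sample_curve()
    y = w * (beta_star @ X) + 0.1 * rng.normal()  # noisy functional linear response
    residual = w * (beta @ X) - y                 # prediction error <beta_t, X_t> - Y_t
    beta -= (eta1 * t**-theta) * residual * (K @ X * w)

# L^2 estimation error of the final iterate against the true slope function.
print("L2 estimation error:", np.sqrt(w * np.sum((beta - beta_star) ** 2)))
```

Under this setup, the prediction error ⟨β_t - β*, X⟩ and the estimation error ‖β_t - β*‖ are exactly the two quantities whose convergence rates the paper analyzes.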