The effectiveness of non-parametric, kernel-based methods for function estimation comes at the price of high computational complexity, which hinders their applicability in adaptive, model-based control. Motivated by approximation techniques based on sparse spectrum Gaussian processes, we focus on models given by regularized trigonometric linear regression. This paper analyzes the performance of such an estimation set-up within the statistical learning framework. In particular, we derive a novel bound on the sample error in finite-dimensional spaces, accounting for noise with potentially unbounded support. Next, we study the approximation error and, by combining the two bounds, discuss the bias-variance trade-off as a function of the regularization parameter.
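The model class considered here can be illustrated with a minimal sketch: ridge regression on random trigonometric (Fourier) features, which is the finite-dimensional approximation underlying sparse spectrum Gaussian processes. All names, dimensions, and parameter values below are illustrative assumptions, not the paper's actual experimental set-up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of an unknown scalar function (assumed example).
n, d, D = 200, 1, 50            # samples, input dimension, number of spectral frequencies
X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random spectral frequencies, e.g. drawn from a Gaussian spectral density.
W = rng.standard_normal((D, d))

def features(X):
    # Trigonometric feature map: [cos(x^T w), sin(x^T w)] for each frequency w.
    Z = X @ W.T
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(D)

lam = 1e-2                      # regularization parameter governing the bias-variance trade-off
Phi = features(X)

# Regularized (Tikhonov) least squares: solve (Phi^T Phi + lam I) theta = Phi^T y.
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2 * D), Phi.T @ y)

# Predictions on a test grid.
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
y_pred = features(X_test) @ theta
```

Increasing `lam` shrinks the coefficients and raises the approximation (bias) component of the error, while decreasing it lets the estimator track the noise more closely, which is the trade-off the two bounds in the paper jointly characterize.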