Random Fourier features provide a way to tackle large-scale machine learning problems with kernel methods. Their slow Monte Carlo convergence rate has motivated research into deterministic Fourier features, whose approximation error can decrease exponentially in the number of basis functions. However, because these methods extend to multiple dimensions through tensor products, they suffer heavily from the curse of dimensionality, limiting their applicability to one-, two-, or three-dimensional scenarios. We overcome this curse of dimensionality by exploiting the tensor product structure of deterministic Fourier features, which enables us to represent the model parameters as a low-rank tensor decomposition. For a regularized squared loss function, we derive a monotonically converging block coordinate descent algorithm with linear complexity in both the sample size and the dimensionality of the inputs, allowing us to learn a parsimonious model in decomposed form using deterministic Fourier features. We demonstrate through numerical experiments that our low-rank tensor approach matches the performance of the corresponding nonparametric model, consistently outperforming random Fourier features.
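To make the pipeline concrete, below is a minimal sketch, not the authors' implementation. It assumes a cosine basis on [0, 1]^D as the deterministic Fourier features, a CP (canonical polyadic) format for the weight tensor, and a ridge penalty on each factor block; all names (det_fourier, cp_predict, als_sweep, M, R, lam) are illustrative.

```python
import numpy as np

def det_fourier(x, M):
    """Deterministic Fourier features for one input dimension: (n,) -> (n, M)."""
    # Assumed basis choice: cosines at integer frequencies on [0, 1].
    return np.cos(np.pi * np.arange(M) * x[:, None])

def cp_predict(Phis, Ws):
    """Evaluate f(x) = <phi_1(x_1) o ... o phi_D(x_D), CP(W_1, ..., W_D)>."""
    A = np.ones((Phis[0].shape[0], Ws[0].shape[1]))
    for Phi_d, W_d in zip(Phis, Ws):
        A *= Phi_d @ W_d          # Hadamard product across dimensions
    return A.sum(axis=1)          # sum over the R rank-one terms

def als_sweep(Phis, Ws, y, lam):
    """One block coordinate descent sweep for the regularized squared loss
    ||y - f||^2 + lam * sum_d ||W_d||_F^2: each factor update is an exact
    ridge solve with the other factors held fixed."""
    n = y.shape[0]
    M, R = Ws[0].shape
    for k in range(len(Ws)):
        # G[i, r] = prod_{d != k} phi_d(x_i)^T w_d^{(r)}
        G = np.ones((n, R))
        for d in range(len(Ws)):
            if d != k:
                G *= Phis[d] @ Ws[d]
        # Row-wise Khatri-Rao design matrix; cost is linear in n and D.
        Z = (G[:, :, None] * Phis[k][:, None, :]).reshape(n, M * R)
        w = np.linalg.solve(Z.T @ Z + lam * np.eye(M * R), Z.T @ y)
        Ws[k] = w.reshape((M, R), order="F")

# Toy usage: D = 5 would already be painful for a full tensor-product model
# (M**D = 8**5 = 32768 weights); the CP factors hold only D * M * R = 160.
rng = np.random.default_rng(0)
n, D, M, R, lam = 200, 5, 8, 4, 1e-3
X = rng.uniform(size=(n, D))
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + 0.01 * rng.normal(size=n)
Phis = [det_fourier(X[:, d], M) for d in range(D)]
Ws = [rng.normal(scale=0.3, size=(M, R)) for _ in range(D)]
for _ in range(10):
    als_sweep(Phis, Ws, y, lam)
print("train MSE:", np.mean((cp_predict(Phis, Ws) - y) ** 2))
```

Each sweep decreases the objective monotonically, since every block update exactly minimizes the regularized loss over its own factor while the penalty on the other factors is unchanged; the per-sweep cost remains linear in both the sample size n and the input dimensionality D, mirroring the complexity claim in the abstract.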