Random Fourier features provide a way to scale kernel methods to large machine learning problems. Their slow Monte Carlo convergence rate has motivated research into deterministic Fourier features, whose approximation error decreases exponentially with the number of frequencies. However, due to their tensor product structure these methods suffer heavily from the curse of dimensionality, limiting their applicability to two- or three-dimensional scenarios. In our approach we overcome this curse of dimensionality by exploiting the tensor product structure of deterministic Fourier features, which enables us to represent the model parameters as a low-rank tensor decomposition. For a regularized squared loss function, we derive a monotonically converging block coordinate descent algorithm with linear complexity in both the sample size and the input dimensionality, allowing us to learn a parsimonious model in decomposed form using deterministic Fourier features. We demonstrate by means of numerical experiments that our low-rank tensor approach attains the same performance as the corresponding nonparametric model, consistently outperforming random Fourier features.
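As a point of reference for the baseline the abstract compares against (not the proposed deterministic method), a minimal NumPy sketch of random Fourier features approximating the Gaussian kernel; all variable names and sizes here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 3, 2000  # input dimension, number of random frequencies (assumed values)

# Frequencies sampled from the spectral density of the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / 2), plus random phase shifts.
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, D)

def phi(x):
    # Random Fourier feature map: phi(x) @ phi(y) approximates k(x, y),
    # with Monte Carlo error decaying only as O(1 / sqrt(D)).
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
approx = phi(x) @ phi(y)
```

The slow O(1/sqrt(D)) decay of `abs(exact - approx)` is precisely what motivates the deterministic features discussed above.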
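To illustrate how a low-rank tensor decomposition of the model parameters avoids the exponential cost of tensor-product features, here is a hedged sketch using a CP (canonical polyadic) format; the feature map, shapes, and rank are hypothetical toy choices, not the authors' exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, R = 5, 8, 4  # input dimensions, features per dimension, CP rank (assumed)

# One M x R factor matrix per input dimension: together they represent an
# M^d parameter tensor as a sum of R rank-one terms.
factors = [rng.standard_normal((M, R)) for _ in range(d)]

def z(xk):
    # Toy deterministic Fourier features for one scalar input.
    return np.cos(np.arange(M) * xk)

def f(x):
    # Inner product of the CP-format parameter tensor with the
    # tensor-product feature vector z(x[0]) ⊗ ... ⊗ z(x[d-1]),
    # evaluated in O(d * M * R) without forming the M^d tensor.
    g = np.ones(R)
    for k in range(d):
        g *= factors[k].T @ z(x[k])
    return g.sum()

x = rng.standard_normal(d)
val = f(x)
```

The per-dimension contraction is what gives the linear complexity in the input dimensionality claimed in the abstract: the full tensor of M**d entries is never materialized.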