The recent discovery of the equivalence between infinitely wide neural networks (NNs) in the lazy training regime and Neural Tangent Kernels (NTKs) (Jacot et al., 2018) has revived interest in kernel methods. However, conventional wisdom suggests that kernel methods are unsuitable for large datasets due to their computational complexity and memory requirements. We introduce a novel random feature regression algorithm that allows us (when necessary) to scale to a virtually infinite number of random features. We illustrate the performance of our method on the CIFAR-10 dataset.
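As background for the random feature regression mentioned above, the following is a minimal sketch of the standard approach (random Fourier features approximating an RBF kernel, followed by ridge regression on the features, in the style of Rahimi & Recht). It is not the paper's algorithm; the feature count, bandwidth, ridge penalty, and synthetic data shapes are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features=1024, bandwidth=1.0, seed=0):
    """Map X of shape (n, d) to random Fourier features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def ridge_fit(Z, y, reg=1e-3):
    """Solve the regularized least-squares problem in feature space."""
    n_features = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + reg * np.eye(n_features), Z.T @ y)

# Illustrative usage on synthetic data (shapes only; not CIFAR-10).
X_train = np.random.randn(1000, 32)
y_train = np.random.randn(1000, 10)          # e.g. one-hot regression targets
Z_train = random_fourier_features(X_train)   # fixed seed keeps the feature map consistent
w = ridge_fit(Z_train, y_train)
X_test = np.random.randn(5, 32)
preds = random_fourier_features(X_test) @ w  # same seed -> same random features
```

The quadratic cost in the number of features in `ridge_fit` is exactly the bottleneck that motivates methods which scale to very large, effectively unbounded feature counts.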