The ``Neural Tangent Kernel'' (NTK) (Jacot et al., 2018) and its empirical variants have been proposed as proxies for capturing certain behaviors of real neural networks. In this work, we study NTKs through the lens of scaling laws and demonstrate that they fall short of explaining important aspects of neural network generalization. In particular, we demonstrate realistic settings where finite-width neural networks have significantly better data scaling exponents than their corresponding empirical and infinite NTKs at initialization. This reveals a more fundamental difference between real networks and NTKs, beyond just a few percentage points of test accuracy. Further, we show that even if the empirical NTK is allowed to be pre-trained on a constant number of samples, the kernel scaling does not catch up to the neural network scaling. Finally, we show that the empirical NTK continues to evolve throughout most of training, in contrast with prior work suggesting that it stabilizes after a few epochs of training. Altogether, our work establishes concrete limitations of the NTK approach in understanding generalization of real networks on natural datasets.
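For concreteness, the two quantities compared above can be written out; the notation below is standard but assumed here for illustration, not fixed by the text. The empirical NTK of a network $f(\cdot\,;\theta)$ at parameters $\theta$ is the Gram matrix of parameter gradients,
\[
\Theta_\theta(x, x') \;=\; \nabla_\theta f(x;\theta)^\top \, \nabla_\theta f(x';\theta),
\]
and a data scaling law fits the test loss $L$ against the number of training samples $n$ as
\[
L(n) \;\approx\; C\, n^{-\alpha},
\]
so a larger scaling exponent $\alpha$ means the learner improves faster as data is added. In these terms, the claim is that finite-width networks attain a larger $\alpha$ than kernel regression with their own empirical or infinite NTKs.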