For certain infinitely wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation. Still, a growing body of work leverages this approximation to successfully analyze important deep learning phenomena and design algorithms for new applications. In our work, we provide strong empirical evidence to determine the practical validity of such an approximation by conducting a systematic comparison of the behavior of different neural networks and their linear approximations on different tasks. We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks, even when they achieve very different performance. However, in contrast to what was previously reported, we find that neural networks do not always perform better than their kernel approximations, and we reveal that the performance gap heavily depends on architecture, dataset size, and training task. We discover that networks overfit to these tasks mostly due to the evolution of their kernel during training, thus revealing a new type of implicit bias.
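For concreteness, the "linear approximation" discussed here is the first-order Taylor expansion of the network around its initialization, f_lin(θ, x) = f(θ0, x) + ⟨∇_θ f(θ0, x), θ − θ0⟩, whose gradient-descent dynamics are governed by the empirical NTK at θ0. Below is a minimal JAX sketch of this construction (not code from the paper); the toy MLP, its parameter shapes, and the helper names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree


def f(params, x):
    # Toy two-layer MLP standing in for any differentiable network f(x; theta).
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]


def linearize(f, params0):
    # First-order Taylor expansion around params0:
    #   f_lin(theta, x) = f(theta0, x) + <grad_theta f(theta0, x), theta - theta0>
    # This is the linear model whose training dynamics are governed by the
    # empirical NTK evaluated at theta0.
    def f_lin(params, x):
        dparams = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
        y0, jvp_out = jax.jvp(lambda p: f(p, x), (params0,), (dparams,))
        return y0 + jvp_out
    return f_lin


def empirical_ntk(f, params, x1, x2):
    # One entry of the empirical NTK for a scalar-output network:
    #   K(x1, x2) = <grad_theta f(theta, x1), grad_theta f(theta, x2)>
    g1, _ = ravel_pytree(jax.grad(lambda p: f(p, x1)[0])(params))
    g2, _ = ravel_pytree(jax.grad(lambda p: f(p, x2)[0])(params))
    return g1 @ g2


# Usage sketch with randomly initialized parameters (shapes are arbitrary).
key = jax.random.PRNGKey(0)
k1, k2, kx = jax.random.split(key, 3)
params0 = {
    "W1": jax.random.normal(k1, (4, 16)) / jnp.sqrt(4.0),
    "b1": jnp.zeros(16),
    "W2": jax.random.normal(k2, (16, 1)) / jnp.sqrt(16.0),
    "b2": jnp.zeros(1),
}
x = jax.random.normal(kx, (4,))
f_lin = linearize(f, params0)
# At initialization the two models agree exactly; they diverge only as the
# parameters (and hence the kernel) evolve during training.
assert jnp.allclose(f(params0, x), f_lin(params0, x))
print(empirical_ntk(f, params0, x, x))
```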