The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network (RNN). In this paper we introduce and study the Recurrent Neural Tangent Kernel (RNTK), which provides new insights into the behavior of overparametrized RNNs, including how different time steps are weighted by the RNTK to form the output under different initialization parameters and nonlinearity choices, and how inputs of different lengths are treated. The ability to compare inputs of different lengths is a property of the RNTK that should greatly benefit practitioners. We demonstrate via a synthetic data experiment and 56 real-world data experiments that the RNTK offers significant performance gains over other kernels, including standard NTKs, across a wide array of data sets.