The Neural Tangent Kernel (NTK) is widely used to analyze overparametrized neural networks due to the famous result of Jacot et al. (2018): in the infinite-width limit, the NTK is deterministic and constant during training. However, this result cannot explain the behavior of deep networks, since it generally does not hold if depth and width tend to infinity simultaneously. In this paper, we study the NTK of fully-connected ReLU networks with depth comparable to width. We prove that the NTK properties depend significantly on the depth-to-width ratio and the distribution of parameters at initialization. In fact, our results indicate the importance of the three phases in the hyperparameter space identified in Poole et al. (2016): ordered, chaotic, and the edge of chaos (EOC). We derive exact expressions for the NTK dispersion in the infinite-depth-and-width limit in all three phases and conclude that the NTK variability grows exponentially with depth at the EOC and in the chaotic phase but not in the ordered phase. We also show that the NTK of deep networks may stay constant during training only in the ordered phase and discuss how the structure of the NTK matrix changes during training.
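To make the quantity under study concrete, below is a minimal sketch (not the authors' code) of the empirical NTK Theta(x, x') = <df(x)/dtheta, df(x')/dtheta> of a finite fully-connected ReLU network, written in JAX. The initialization variances (sigma_w^2, sigma_b^2) are the hyperparameters that select the phase; for ReLU, (2, 0) is commonly taken as the EOC, with smaller sigma_w^2 giving the ordered phase and larger sigma_w^2 the chaotic phase. All function names, the network size, and the 20-seed dispersion estimate are illustrative assumptions, not taken from the paper.

    import jax
    import jax.numpy as jnp

    def init_params(key, widths, sigma_w2=2.0, sigma_b2=0.0):
        # (sigma_w2, sigma_b2) selects the phase: for ReLU the EOC is commonly
        # (2.0, 0.0); sigma_w2 < 2 gives the ordered phase, sigma_w2 > 2 the chaotic one.
        params = []
        for d_in, d_out in zip(widths[:-1], widths[1:]):
            key, wk, bk = jax.random.split(key, 3)
            W = jnp.sqrt(sigma_w2 / d_in) * jax.random.normal(wk, (d_in, d_out))
            b = jnp.sqrt(sigma_b2) * jax.random.normal(bk, (d_out,))
            params.append((W, b))
        return params

    def forward(params, x):
        # Fully-connected ReLU network with a scalar output.
        h = x
        for W, b in params[:-1]:
            h = jax.nn.relu(h @ W + b)
        W, b = params[-1]
        return jnp.squeeze(h @ W + b)

    def empirical_ntk(params, x1, x2):
        # Theta(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>, summed over all parameters.
        g1 = jax.grad(forward)(params, x1)
        g2 = jax.grad(forward)(params, x2)
        return sum(jnp.vdot(a, b) for a, b in
                   zip(jax.tree_util.tree_leaves(g1), jax.tree_util.tree_leaves(g2)))

    # Depth comparable to width (here 16 hidden layers of width 16): the NTK at
    # initialization fluctuates across random draws of the parameters.
    widths = [16] + [16] * 16 + [1]
    x = jnp.ones(16) / 4.0
    vals = jnp.array([empirical_ntk(init_params(jax.random.PRNGKey(s), widths), x, x)
                      for s in range(20)])
    print("relative NTK dispersion at init:", jnp.std(vals) / jnp.abs(jnp.mean(vals)))

Re-running the last block with different (sigma_w2, sigma_b2) and different depth-to-width ratios gives a rough empirical view of how the dispersion depends on the phase and on depth, which is the quantity the paper characterizes exactly in the infinite-depth-and-width limit.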