Understanding the relation between deep and shallow neural networks is extremely important for the theoretical study of deep learning. In this work, we discover an embedding principle in depth: the loss landscape of an NN "contains" all critical points of the loss landscapes of shallower NNs. The key tool for our discovery is the critical lifting operator proposed in this work, which maps any critical point of a network to critical manifolds of any deeper network while preserving the outputs. This principle provides new insights into many widely observed behaviors of DNNs. Regarding the ease of training deep networks, we show that a local minimum of an NN can be lifted to strict saddle points of a deeper NN. Regarding the acceleration effect of batch normalization, we demonstrate that batch normalization helps avoid the critical manifolds lifted from shallower NNs by suppressing layer linearization. We also prove that increasing training data shrinks the lifted critical manifolds, which can result in acceleration of training, as demonstrated in experiments. Overall, our discovery of the embedding principle in depth uncovers the depth-wise hierarchical structure of the deep learning loss landscape, which serves as a solid foundation for further study of the role of depth in DNNs.
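For intuition about output preservation under depth-wise lifting, the following minimal sketch (illustrative only; all names are assumptions, not the paper's construction) shows the simplest kind of output-preserving map from a one-hidden-layer ReLU network into a two-hidden-layer architecture, by inserting an identity layer and using relu(relu(z)) = relu(z). The paper's critical lifting operator additionally maps critical points to critical manifolds of the deeper network, a property this snippet does not verify.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shallow_net(x, W1, b1, W2, b2):
    # One-hidden-layer ReLU network: R^d -> R.
    return relu(x @ W1 + b1) @ W2 + b2

def lifted_deep_net(x, W1, b1, W_mid, b_mid, W2, b2):
    # Two-hidden-layer network obtained by inserting an extra layer.
    h = relu(x @ W1 + b1)
    h = relu(h @ W_mid + b_mid)  # inserted layer
    return h @ W2 + b2

rng = np.random.default_rng(0)
d, m = 3, 5
W1, b1 = rng.standard_normal((d, m)), rng.standard_normal(m)
W2, b2 = rng.standard_normal((m, 1)), rng.standard_normal(1)

# A simple output-preserving lift: the inserted layer is the identity map.
# Since relu(relu(z)) = relu(z), the deeper network computes the same function.
W_mid, b_mid = np.eye(m), np.zeros(m)

x = rng.standard_normal((10, d))
assert np.allclose(shallow_net(x, W1, b1, W2, b2),
                   lifted_deep_net(x, W1, b1, W_mid, b_mid, W2, b2))
```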