Unraveling the general structure underlying the loss landscapes of deep neural networks (DNNs) is important for the theoretical study of deep learning. Inspired by the embedding principle of the DNN loss landscape, we prove in this work an embedding principle in depth: the loss landscape of an NN "contains" all critical points of the loss landscapes of shallower NNs. Specifically, we propose a critical lifting operator by which any critical point of a shallower network can be lifted to a critical manifold of the target network while preserving the network outputs. Through lifting, a local minimum of an NN can become a strict saddle point of a deeper NN, which can be easily escaped by first-order methods. The embedding principle in depth reveals a large family of critical points at which layer linearization happens, i.e., the computation of certain layers is effectively linear for the training inputs. We empirically demonstrate that, by suppressing layer linearization, batch normalization helps avoid these lifted critical manifolds, resulting in a faster decay of the loss. We also demonstrate that increasing the training data shrinks the lifted critical manifolds and thus could accelerate training. Overall, the embedding principle in depth complements the embedding principle (in width), together providing a complete characterization of the hierarchical structure of critical points/manifolds of a DNN loss landscape.
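To make the "lifting to a deeper network while preserving outputs" idea concrete, below is a minimal NumPy sketch of one simple output-preserving construction: inserting an identity layer right after a ReLU layer, so that the inserted layer's computation is effectively linear on all inputs (layer linearization). This is an illustrative assumption of ours, not the paper's critical lifting operator; the layer sizes and the helper `forward` are hypothetical, and the sketch only verifies output preservation, whereas the paper's operator additionally guarantees that the lifted point remains critical.

```python
# Illustrative sketch (not the paper's exact construction): insert an identity layer
# after a ReLU layer. Since ReLU outputs are nonnegative, ReLU(I h + 0) = h, so the
# inserted layer acts linearly and the deeper network reproduces the shallower one.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(params, x):
    """MLP forward pass: list of (W, b), ReLU on all but the last layer."""
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W + b)
    W, b = params[-1]
    return h @ W + b

# A "shallow" two-hidden-layer ReLU MLP (hypothetical sizes, for illustration only).
d_in, d_h, d_out = 5, 8, 3
shallow = [
    (rng.normal(size=(d_in, d_h)), rng.normal(size=d_h)),
    (rng.normal(size=(d_h, d_h)), rng.normal(size=d_h)),
    (rng.normal(size=(d_h, d_out)), rng.normal(size=d_out)),
]

# Lift: insert an identity layer (W = I, b = 0) after the first ReLU layer.
lifted = [shallow[0], (np.eye(d_h), np.zeros(d_h))] + shallow[1:]

X = rng.normal(size=(10, d_in))  # some training inputs
print(np.allclose(forward(shallow, X), forward(lifted, X)))  # True: outputs preserved
```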