Neural networks trained with gradient descent can undergo non-trivial phase transitions as a function of the learning rate. In (Lewkowycz et al., 2020) it was discovered that wide neural nets can exhibit a catapult phase for super-critical learning rates, where the training loss grows exponentially quickly at early times before rapidly decreasing to a small value. During this phase the top eigenvalue of the neural tangent kernel (NTK) also undergoes significant evolution. In this work, we will prove that the catapult phase exists in a large class of models, including quadratic models and two-layer, homogeneous neural nets. To do this, we show that for a certain range of learning rates the weight norm decreases whenever the loss becomes large. We also empirically study learning rates beyond this theoretically derived range and show that the activation map of ReLU nets trained with super-critical learning rates becomes increasingly sparse as we increase the learning rate.
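The following is a minimal illustrative sketch, not the paper's analysis: it runs plain gradient descent on the two-parameter toy model f(x) = u·v·x (a quadratic model of the kind mentioned above), whose scalar NTK is λ = (u² + v²)x², so the critical learning rate is 2/λ. The initial values, step counts, and the 2/λ and 4/λ thresholds used to pick the three learning rates are assumptions chosen only to make the sub-critical, catapult, and divergent regimes visible.

```python
import numpy as np

def run_gd(eta, steps=200, u0=1.5, v0=0.5, x=1.0, y=0.0):
    """Gradient descent on the toy model f(x) = u * v * x with loss (f - y)^2 / 2.

    The scalar NTK of this model is lam = (u^2 + v^2) * x^2, so the critical
    learning rate is 2 / lam.  For learning rates roughly between 2/lam and
    4/lam the loss first grows while the weight norm (and hence lam) shrinks,
    and the loss then drops to a small value: the catapult.
    """
    u, v = u0, v0
    losses, lams = [], []
    for _ in range(steps):
        f = u * v * x
        loss = 0.5 * (f - y) ** 2
        lam = (u ** 2 + v ** 2) * x ** 2
        losses.append(loss)
        lams.append(lam)
        if not np.isfinite(loss) or loss > 1e12:   # stop once training has diverged
            break
        g = (f - y) * x                            # dL/df times x
        u, v = u - eta * g * v, v - eta * g * u    # simultaneous gradient-descent update
    return np.array(losses), np.array(lams)

if __name__ == "__main__":
    lam0 = 1.5 ** 2 + 0.5 ** 2                     # initial NTK eigenvalue (x = 1)
    for label, eta in [("sub-critical", 1.0 / lam0),
                       ("catapult",     3.0 / lam0),
                       ("divergent",    4.5 / lam0)]:
        losses, lams = run_gd(eta)
        print(f"{label:12s} eta={eta:.2f}  peak loss={np.nanmax(losses):.2e}  "
              f"final loss={losses[-1]:.2e}  final NTK={lams[-1]:.2f}")
```

In the catapult regime the printed peak loss exceeds the initial loss while the final loss is small and the final NTK eigenvalue is well below its initial value, mirroring the behavior described above; the divergent run simply blows up.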