We discover restrained numerical instabilities in current training practices of deep networks with stochastic gradient descent (SGD). We show that numerical error (on the order of the smallest floating point bit) induced by floating point arithmetic in training deep nets can be amplified significantly, resulting in test accuracy variance comparable to the variance due to stochasticity in SGD. We trace this to instabilities of the optimization dynamics that are restrained, i.e., localized over iterations and regions of the weight tensor space. We do this by presenting a theoretical framework using numerical analysis of partial differential equations (PDE), and analyzing the gradient descent PDE of convolutional neural networks (CNNs). We show that this PDE is stable only under certain conditions on the learning rate and weight decay. Rather than blowing up when these conditions are violated, the instability can be restrained. We show this is a consequence of the non-linear PDE associated with the gradient descent of the CNN, whose local linearization changes when over-driving the step size of the discretization, resulting in a stabilizing effect. We link restrained instabilities to the recently discovered Edge of Stability (EoS) phenomenon, in which the stable step size predicted by classical theory is exceeded while the loss continues to decrease and optimization still converges. Because restrained instabilities occur at the EoS, our theory provides new predictions about the EoS, in particular, the role of regularization and the dependence on network complexity.
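The classical stability condition referenced above can be seen in the simplest setting: for gradient descent on a quadratic loss with curvature λ, the iteration contracts if and only if the learning rate satisfies η < 2/λ. The following minimal sketch (an illustration of the classical criterion only, not the paper's CNN analysis) shows the iterates shrinking just below the threshold and diverging just above it:

```python
# Illustrative sketch: classical gradient descent stability on a quadratic
# loss f(w) = (lam / 2) * w**2. The update w <- w - eta * lam * w has
# amplification factor (1 - eta * lam), which contracts iff eta < 2/lam.
def run_gd(eta, lam=10.0, w0=1.0, steps=200):
    """Run gradient descent on f(w) = (lam/2) w^2; return final |w|."""
    w = w0
    for _ in range(steps):
        w = w - eta * lam * w  # amplification factor: 1 - eta*lam
    return abs(w)

# Stability threshold here is eta = 2/lam = 0.2.
print(run_gd(eta=0.19))  # below threshold: iterates decay toward 0
print(run_gd(eta=0.21))  # above threshold: iterates blow up
```

At the Edge of Stability, deep networks are observed to train with η at or beyond this 2/λ bound (with λ the largest Hessian eigenvalue) without diverging, which is the regime the abstract's restrained-instability analysis addresses.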