We discover restrained numerical instabilities in current training practices of deep networks with SGD. We show that numerical error (on the order of the smallest floating-point bit) induced by floating-point arithmetic in training deep nets can be amplified significantly, resulting in test accuracy variance comparable to that caused by the stochasticity of SGD. We show that this can likely be traced to instabilities of the optimization dynamics that are restrained, i.e., localized over iterations and over regions of the weight tensor space. To do this, we present a theoretical framework based on the numerical analysis of partial differential equations (PDEs), and analyze the gradient-descent PDE of a simplified convolutional neural network (CNN), showing that it is stable only under certain conditions on the learning rate and weight decay. We reproduce the localized instabilities in the PDE for the simplified network, which arise when these conditions are violated.
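To build intuition for the two mechanisms the abstract describes, the sketch below is a minimal, hypothetical illustration and not the paper's actual analysis: it runs gradient descent with weight decay on a 1-D quadratic loss (a stand-in for the simplified network), perturbs the initial weight by one unit in the last place, and shows the perturbation decaying when the stability condition on the learning rate and weight decay holds and being amplified when it is violated. The quadratic loss and the specific values of `eta`, `lam`, and `mu` are illustrative assumptions.

```python
import numpy as np

# Illustrative toy model (not the paper's setup): gradient descent with weight
# decay on L(w) = 0.5 * lam * w**2.  The update
#     w <- w - eta * (lam * w + mu * w) = (1 - eta * (lam + mu)) * w
# is a linear map, so a perturbation delta is multiplied by
# |1 - eta * (lam + mu)| at every step: it decays when eta * (lam + mu) < 2
# and is amplified otherwise.

def run(eta, lam, mu, w0, steps):
    """Run `steps` iterations of gradient descent with weight decay from w0."""
    w = w0
    for _ in range(steps):
        grad = lam * w + mu * w      # loss gradient plus weight-decay term
        w = w - eta * grad
    return w

lam, mu, w0, steps = 1.0, 1e-2, 0.5, 200

for eta in (0.5, 2.5):               # stable vs. unstable learning rate
    w_ref = run(eta, lam, mu, w0, steps)
    # Perturb the initial weight by one ULP (the smallest floating-point bit).
    w_pert = run(eta, lam, mu, np.nextafter(w0, 1.0), steps)
    print(f"eta={eta}: |w_ref - w_pert| after {steps} steps = {abs(w_ref - w_pert):.3e}")
```

Under these assumed values, the one-ULP difference (~1e-16) shrinks toward zero for `eta=0.5` but grows by many orders of magnitude for `eta=2.5`, mirroring, in the simplest possible setting, how bit-level numerical error can either stay negligible or be amplified depending on the learning rate and weight decay.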