The recipe behind the success of deep learning has been the combination of neural networks and gradient-based optimization. Understanding the behavior of gradient descent, however, and particularly its instability, has lagged behind its empirical success. To add to the theoretical tools available to study gradient descent, we propose the principal flow (PF), a continuous-time flow that approximates gradient descent dynamics. To our knowledge, the PF is the only continuous flow that captures the divergent and oscillatory behaviors of gradient descent, including escaping local minima and saddle points. Through its dependence on the eigendecomposition of the Hessian, the PF sheds light on the recently observed edge of stability phenomena in deep learning. Using our new understanding of instability, we propose a learning rate adaptation method which enables us to control the trade-off between training stability and test set evaluation performance.
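For context, the following is a minimal sketch of the background objects referred to above (standard facts, not the PF itself), writing $E(\theta)$ for the training loss and $\eta$ for the learning rate: gradient descent is a discrete update, while the classical continuous-time approximation of it is the negative gradient flow, which monotonically decreases the loss and therefore cannot reproduce the divergent and oscillatory behavior mentioned in the abstract.

% Standard background, not the principal flow: the discrete update
% studied in the abstract and its classical continuous-time approximation.
\begin{align}
  \theta_{t+1} &= \theta_t - \eta \nabla E(\theta_t), && \text{(gradient descent)} \\
  \dot{\theta}(t) &= -\nabla E(\theta(t)). && \text{(negative gradient flow)}
\end{align}

Along a Hessian eigendirection with eigenvalue $\lambda > 0$ of a quadratic loss, gradient descent contracts only when $\eta \lambda < 2$ and oscillates with growing amplitude when $\eta \lambda > 2$, whereas the gradient flow always decreases $E$; the empirically observed edge of stability regime corresponds to the largest Hessian eigenvalue hovering near $2/\eta$ during training.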