Momentum methods are now used pervasively within the machine learning community for training non-convex models such as deep neural networks. Empirically, they outperform traditional stochastic gradient descent (SGD) approaches. In this work we develop a Lyapunov analysis of SGD with momentum (SGD+M) by utilizing an equivalent rewriting of the method known as the stochastic primal averaging (SPA) form. This analysis is much tighter than previous theory in the non-convex case, and as a result we are able to give precise insights into when SGD+M may outperform SGD, and which hyper-parameter schedules will work and why.
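To make the SPA rewriting concrete, the sketch below numerically checks that, for constant hyper-parameters, the SGD+M iterates coincide with those of a primal-averaging recursion under one standard parameter coupling (c = 1 - beta, eta = alpha / (1 - beta)). This is an illustrative assumption about the coupling, not necessarily the exact parameterization used in the paper; the toy quadratic objective is likewise only for demonstration.

```python
import numpy as np

# Toy deterministic problem: gradient of f(x) = 0.5 * ||x||^2 is x.
def grad(x):
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal(5)

# Constant hyper-parameters (assumed for this illustration).
alpha, beta = 0.1, 0.9        # SGD+M step size and momentum
c = 1.0 - beta                # SPA averaging coefficient
eta = alpha / c               # SPA step size

# SGD+M form: m_{k+1} = beta * m_k + g_k ;  x_{k+1} = x_k - alpha * m_{k+1}
x_m, m = x0.copy(), np.zeros_like(x0)
# SPA form:   z_{k+1} = z_k - eta * g_k ;  x_{k+1} = (1 - c) * x_k + c * z_{k+1}
x_s, z = x0.copy(), x0.copy()

for _ in range(100):
    # The two x-sequences coincide, so both branches see the same gradient.
    m = beta * m + grad(x_m)
    x_m = x_m - alpha * m

    z = z - eta * grad(x_s)
    x_s = (1.0 - c) * x_s + c * z

print(np.allclose(x_m, x_s))  # True: the iterate sequences match
```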