SGD with Momentum (SGDM) is widely used for large-scale optimization of machine learning problems. Yet, the theoretical understanding of this algorithm is not complete. In fact, even the most recent results require changes to the algorithm, such as an averaging scheme and a projection onto a bounded domain, which are never used in practice. Also, no lower bound is known for SGDM. In this paper, we prove for the first time that for any constant momentum factor, there exists a Lipschitz and convex function for which the last iterate of SGDM suffers from an error $\Omega(\frac{\log T}{\sqrt{T}})$ after $T$ steps. Based on this fact, we study a new class of (both adaptive and non-adaptive) Follow-The-Regularized-Leader-based SGDM algorithms with \emph{increasing momentum} and \emph{shrinking updates}. For these algorithms, we show that the last iterate has optimal convergence $O(\frac{1}{\sqrt{T}})$ for unconstrained convex optimization problems. Further, we show that in the interpolation setting with convex and smooth functions, our new SGDM algorithm automatically converges at a rate of $O(\frac{\log T}{T})$. We also present empirical results.
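To make the "increasing momentum and shrinking updates" idea concrete, the following is a minimal sketch of such an SGDM loop. The abstract does not specify the schedules, so the choices $\beta_t = \frac{t}{t+1}$ (momentum increasing toward 1) and $\eta_t = \frac{c}{\sqrt{t+1}}$ (shrinking step size) are purely illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sgdm_increasing_momentum(grad, x0, T, c=1.0):
    """Illustrative SGDM with increasing momentum and shrinking updates.

    `grad` is a stochastic (sub)gradient oracle; the schedules for beta_t
    and eta_t below are hypothetical choices, used only to sketch the idea.
    """
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for t in range(T):
        g = grad(x)                    # stochastic (sub)gradient at the current iterate
        beta_t = t / (t + 1.0)         # momentum factor grows toward 1 (assumed schedule)
        eta_t = c / np.sqrt(t + 1.0)   # step size shrinks over time (assumed schedule)
        m = beta_t * m + (1.0 - beta_t) * g
        x = x - eta_t * m
    return x                           # last iterate, the quantity analyzed in the paper

# Usage: minimize the Lipschitz convex function f(x) = |x - 3| with noisy subgradients.
rng = np.random.default_rng(0)
grad = lambda x: np.sign(x - 3.0) + 0.1 * rng.standard_normal(x.shape)
x_T = sgdm_increasing_momentum(grad, x0=np.zeros(1), T=10_000)
print(x_T)  # should be close to 3
```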