A theoretical, and potentially also practical, problem with stochastic gradient descent (SGD) is that trajectories may escape to infinity. In this note, we investigate uniform boundedness properties of iterates and function values along the trajectories of the stochastic gradient descent algorithm and its important momentum variant. Under smoothness and $R$-dissipativity of the loss function, we show that broad families of step-sizes, including the widely used step-decay and cosine (with or without restart) step-sizes, result in uniformly bounded iterates and function values. Several important applications that satisfy these assumptions, including phase retrieval problems, Gaussian mixture models, and some neural network classifiers, are discussed in detail. We further extend the uniform boundedness results for SGD and its momentum variant to a generalized dissipativity condition covering functions whose tails grow more slowly than a quadratic. This covers further applications of interest, for example Bayesian logistic regression and logistic regression with $\ell_1$ regularization.
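For concreteness, dissipativity conditions of the kind invoked above typically take the following form (a sketch of the assumption as we read it; the precise $R$-dissipativity statement and its constants may differ in the full text): there exist $a > 0$ and $b \ge 0$ such that
\[
\langle \nabla f(x), x \rangle \,\ge\, a\,\|x\|^2 - b \qquad \text{for all } \|x\| \ge R .
\]
Intuitively, outside a ball of radius $R$ the negative gradient has a component pointing back toward the origin, which is what rules out escape to infinity.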
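As an illustration of the algorithmic setting, the following minimal sketch implements SGD with heavy-ball momentum under the step-decay and cosine-with-restart schedules named above; it is not the paper's code, and all function and parameter names (step_decay, cosine_restart, eta0, drop, period, beta) are hypothetical placeholders rather than the paper's notation.

import numpy as np

# Illustrative sketch only; names and hyper-parameters are placeholders.

def step_decay(t, eta0=0.1, drop=0.5, period=100):
    # Step-decay: shrink the base step-size by `drop` every `period` iterations.
    return eta0 * drop ** (t // period)

def cosine_restart(t, eta0=0.1, period=100):
    # Cosine with restart: anneal from eta0 toward 0 over each cycle, then restart.
    return eta0 * 0.5 * (1.0 + np.cos(np.pi * (t % period) / period))

def sgd_momentum(grad, x0, schedule, beta=0.9, n_steps=1000, seed=0):
    # Heavy-ball iteration: v_{t+1} = beta*v_t - eta_t*g_t,  x_{t+1} = x_t + v_{t+1}.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for t in range(n_steps):
        g = grad(x) + rng.normal(size=x.shape)  # stochastic gradient: exact gradient + noise
        v = beta * v - schedule(t) * g
        x = x + v
    return x

# Dissipative toy loss f(x) = 0.5*||x||^2, so grad f(x) = x; the iterates stay bounded.
x_final = sgd_momentum(lambda x: x, x0=np.ones(5), schedule=cosine_restart)

Swapping in schedule=step_decay exercises the other schedule family; in either case it is the dissipativity of $f$ that the analysis uses to guarantee the iterates remain uniformly bounded.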