A theoretical, and potentially practical, problem with stochastic gradient descent is that trajectories may escape to infinity. In this note, we investigate uniform boundedness properties of the iterates and function values along the trajectories of the stochastic gradient descent algorithm and its important momentum variant. Under smoothness and $R$-dissipativity of the loss function, we show that broad families of step-sizes, including the widely used step-decay and cosine-with-restart (or without restart) step-sizes, result in uniformly bounded iterates and function values. Several important applications that satisfy these assumptions, including phase retrieval problems, Gaussian mixture models, and some neural network classifiers, are discussed in detail.
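The momentum iteration and the cosine-with-restart step-size schedule mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the quadratic loss, the hyperparameter values, and the Gaussian gradient-noise model are assumptions for the example, not taken from the paper. (The quadratic $f(x)=x^2$ satisfies a dissipativity condition of the form $x\,\nabla f(x)\ge a x^2 - b$, so its trajectories are expected to stay bounded.)

```python
import math
import random

def cosine_restart_lr(t, eta_max=0.1, eta_min=0.001, period=50):
    """Cosine step-size with warm restarts: decays from eta_max to
    eta_min over each period of `period` steps, then restarts."""
    phase = (t % period) / period
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * phase))

def sgd_momentum(grad, x0, steps=200, beta=0.9, noise_std=0.1, seed=0):
    """Stochastic heavy-ball iteration:
        v_{t+1} = beta * v_t - eta_t * g_t,   x_{t+1} = x_t + v_{t+1},
    where g_t is a noisy gradient sample (illustrative noise model)."""
    rng = random.Random(seed)
    x, v = x0, 0.0
    for t in range(steps):
        g = grad(x) + rng.gauss(0.0, noise_std)  # stochastic gradient oracle
        v = beta * v - cosine_restart_lr(t) * g
        x = x + v
    return x

# Dissipative toy loss f(x) = x^2 with gradient 2x; iterates started
# far from the minimizer remain bounded under this schedule.
x_final = sgd_momentum(grad=lambda x: 2.0 * x, x0=5.0)
```

Swapping `cosine_restart_lr` for a step-decay schedule (halving `eta` every fixed number of steps) exercises the other step-size family discussed in the note.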