Adam is a widely used stochastic optimization method for deep learning applications. While practitioners prefer Adam because it requires less parameter tuning, its use is problematic from a theoretical point of view since it may not converge. Variants of Adam have been proposed with provable convergence guarantees, but they tend not to be competitive with Adam in practical performance. In this paper, we propose a new method named Adam$^+$ (pronounced as Adam-plus). Adam$^+$ retains some of the key components of Adam, but it also has several notable differences: (i) it does not maintain a moving average of the second moment estimate, but instead computes a moving average of the first moment estimate at extrapolated points; (ii) its adaptive step size is formed not by dividing by the square root of the second moment estimate, but instead by dividing by the root of the norm of the first moment estimate. As a result, Adam$^+$ requires little parameter tuning, like Adam, yet it enjoys a provable convergence guarantee. Our analysis further shows that Adam$^+$ enjoys adaptive variance reduction, i.e., the variance of the stochastic gradient estimator decreases as the algorithm converges, hence yielding adaptive convergence. We also propose a more general variant of Adam$^+$ with different adaptive step sizes and establish its fast convergence rate. Our empirical studies on various deep learning tasks, including image classification, language modeling, and automatic speech recognition, demonstrate that Adam$^+$ significantly outperforms Adam and achieves performance comparable to best-tuned SGD and momentum SGD.
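To make differences (i) and (ii) concrete, the following is a minimal sketch of an update consistent with the description above; the extrapolation rule, the coefficient $\gamma$, and the exact normalization are illustrative assumptions, not the paper's precise recursion. Here $\tilde{x}_t$ denotes the extrapolated point, $m_t$ the first moment estimate, $\beta$ the momentum parameter, $\eta$ the base step size, and $\xi_t$ a stochastic sample:
\begin{align*}
\tilde{x}_t &= x_t + \gamma\,(x_t - x_{t-1}), && \text{(extrapolated point; assumed form)}\\
m_t &= (1-\beta)\, m_{t-1} + \beta\, \nabla f(\tilde{x}_t;\xi_t), && \text{(moving average of the first moment)}\\
x_{t+1} &= x_t - \eta\, \frac{m_t}{\|m_t\|^{1/2}}. && \text{(step size scaled by the root of } \|m_t\| \text{)}
\end{align*}
Note that the scaling $\|m_t\|^{1/2}$ plays the role of Adam's per-coordinate $\sqrt{v_t}$ without maintaining a second moment estimate, which is what enables the adaptive variance-reduction argument in the analysis.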