Stochastic gradient methods (SGMs) are the predominant approaches for training deep learning models. The adaptive versions (e.g., Adam and AMSGrad) have been extensively used in practice, partly because they achieve faster convergence than the non-adaptive versions while incurring little overhead. On the other hand, asynchronous (async) parallel computing has exhibited significantly higher speed-up than its synchronous (sync) counterpart. Async-parallel non-adaptive SGMs have been well studied in the literature, from the perspectives of both theory and practical performance. Adaptive SGMs can also be implemented in an async-parallel way without much difficulty. However, to the best of our knowledge, no theoretical result on async-parallel adaptive SGMs has been established. The difficulty in analyzing adaptive SGMs with async updates stems from the second moment term. In this paper, we propose an async-parallel adaptive SGM based on AMSGrad. We show that the proposed method inherits the convergence guarantee of AMSGrad for both convex and non-convex problems, provided the staleness (also called delay) caused by asynchrony is bounded. Our convergence rate results indicate a nearly linear parallelization speed-up if $\tau=o(K^{\frac{1}{4}})$, where $\tau$ is the staleness and $K$ is the number of iterations. The proposed method is tested on both convex and non-convex machine learning problems, and the numerical results demonstrate its clear advantages over the sync counterpart and the async-parallel non-adaptive SGM.
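To make the role of the second moment term concrete, the following is a minimal single-process sketch of AMSGrad-style updates driven by stale gradients on a toy least-squares problem. It is an illustration of the general idea under stated assumptions, not the proposed async-parallel method: the hyperparameters, the fixed delay `tau`, and the problem data are all illustrative assumptions.

```python
import numpy as np

def amsgrad_step(x, m, v, v_hat, grad, lr=1e-2, beta1=0.9, beta2=0.999, eps=1e-8):
    # First- and second-moment estimates as in AMSGrad.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # The element-wise max over past second moments keeps the effective step
    # size non-increasing; this is the term that makes the analysis with
    # delayed (stale) gradients delicate.
    v_hat = np.maximum(v_hat, v)
    x_new = x - lr * m / (np.sqrt(v_hat) + eps)
    return x_new, m, v, v_hat

# Toy simulation: the gradient is evaluated at an iterate that is `tau` steps
# old, mimicking bounded staleness in an async-parallel run.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)
x = np.zeros(10)
m = v = v_hat = np.zeros(10)
history, tau = [x.copy()], 4
for k in range(500):
    x_stale = history[max(0, len(history) - 1 - tau)]  # delayed read of the iterate
    grad = A.T @ (A @ x_stale - b) / len(b)            # least-squares gradient
    x, m, v, v_hat = amsgrad_step(x, m, v, v_hat, grad)
    history.append(x.copy())
```

In a real async-parallel implementation the delay arises from workers reading and writing shared parameters without synchronization; the fixed `tau` above only emulates the bounded-staleness assumption under which the convergence guarantees are stated.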