Asynchronous momentum stochastic gradient descent (Async-MSGD) is one of the most popular algorithms in distributed machine learning. However, its convergence properties for complicated nonconvex problems are still largely unknown, due to current technical limitations. In this paper, we therefore propose to analyze the algorithm through a simpler but nontrivial nonconvex problem, streaming PCA, which helps us better understand Async-MSGD even for more general problems. Specifically, we establish the asymptotic rate of convergence of Async-MSGD for streaming PCA via diffusion approximation. Our results indicate a fundamental tradeoff between asynchrony and momentum: to ensure convergence and acceleration through asynchrony, we have to reduce the momentum (compared with Sync-MSGD). To the best of our knowledge, this is the first theoretical attempt at understanding Async-MSGD for distributed nonconvex stochastic optimization. Numerical experiments on both streaming PCA and training deep neural networks are provided to support our findings for Async-MSGD.
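To make the setting concrete, the following is a minimal sketch (not the paper's implementation) of Async-MSGD for streaming PCA: an Oja-style update with heavy-ball momentum, where asynchrony is emulated by computing each stochastic gradient at an iterate that is a fixed number of steps stale. All names and parameter values (eta, mu, tau, n_steps) are illustrative assumptions; the tradeoff described above corresponds to the observation that, for a fixed delay tau, the momentum mu must be kept small enough for the iteration to remain stable.

```python
import numpy as np

def async_msgd_streaming_pca(cov, eta=0.01, mu=0.5, tau=4, n_steps=20000, seed=0):
    """Illustrative Async-MSGD for streaming PCA (top eigenvector of `cov`).

    Asynchrony is modeled by a fixed gradient delay `tau`; `mu` is the momentum.
    """
    rng = np.random.default_rng(seed)
    d = cov.shape[0]
    chol = np.linalg.cholesky(cov)           # to draw streaming samples x ~ N(0, cov)

    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    v = np.zeros(d)                           # momentum buffer
    history = [w.copy()]                      # past iterates, used for stale reads

    for _ in range(n_steps):
        # Asynchrony: gradient is evaluated at an iterate that is `tau` steps old.
        w_stale = history[max(0, len(history) - 1 - tau)]
        x = chol @ rng.standard_normal(d)     # one streaming sample
        grad = (x @ w_stale) * x              # stochastic gradient of 0.5 * w^T Sigma w

        v = mu * v + eta * grad               # heavy-ball momentum update
        w = w + v
        w /= np.linalg.norm(w)                # project back onto the unit sphere
        history.append(w.copy())

    return w

if __name__ == "__main__":
    # Diagonal covariance: the leading eigenvector is e_1.
    Sigma = np.diag([5.0, 1.0, 0.5, 0.1])
    w_hat = async_msgd_streaming_pca(Sigma)
    print("alignment with top eigenvector:", abs(w_hat[0]))
```

In this toy setup, increasing the delay `tau` while keeping `mu` large tends to make the iterates oscillate or diverge, whereas shrinking `mu` restores convergence, which is the qualitative behavior the analysis in the paper quantifies.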