Training deep convolutional neural networks for computer vision tasks is slow and inefficient, especially when the model is large and training is distributed across multiple devices. This inefficiency stems from the forward locking, backward locking, and update locking problems of the backpropagation algorithm. Existing acceleration approaches either address only one of these locking problems or suffer severe accuracy loss or memory inefficiency. Moreover, none of them consider the straggler problem among devices. In this paper, we propose Layer-wise Staleness and a novel efficient training algorithm, Diversely Stale Parameters (DSP), to address these challenges. We also analyze the convergence of DSP with two popular gradient-based methods and prove that both are guaranteed to converge to critical points for non-convex problems. Finally, extensive experimental results on training deep learning models demonstrate that the proposed DSP algorithm achieves significant training speedup with stronger robustness than the compared methods.