SOTA decentralized SGD algorithms can overcome the bandwidth bottleneck at the parameter server by using communication collectives such as Ring All-Reduce for synchronization. While the parameter updates in distributed SGD may happen asynchronously, there is still a synchronization barrier to ensure that the local training epoch at every learner is complete before the learners can advance to the next epoch. The delays incurred while waiting for the slowest learners (stragglers) remain a problem in the synchronization steps of these state-of-the-art decentralized frameworks. In this paper, we propose (de)centralized Non-blocking SGD, which can address the straggler problem in a heterogeneous environment. The main idea of Non-blocking SGD is to split the original batch into mini-batches, then accumulate the gradients and update the model based on the finished mini-batches. The non-blocking idea can be implemented on top of decentralized algorithms, including Ring All-Reduce, D-PSGD, and MATCHA, to solve the straggler problem. Moreover, using gradient accumulation to update the model also guarantees convergence and avoids gradient staleness. Run-time analysis with random straggler delays and computational efficiency/throughput of devices is also presented to show the advantage of Non-blocking SGD. Experiments on a suite of datasets and deep learning networks validate the theoretical analyses and demonstrate that Non-blocking SGD speeds up training and accelerates convergence. Compared with state-of-the-art decentralized asynchronous algorithms such as D-PSGD and MATCHA, Non-blocking SGD takes up to 2x less time to reach the same training loss in a heterogeneous environment.
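
The following is a minimal single-node sketch of the gradient-accumulation idea described above, assuming PyTorch; the toy model, the `non_blocking_step` helper, and the `time_budget` cutoff are illustrative placeholders and not the paper's implementation.

```python
# Sketch (not the authors' code): split a batch into micro-batches, accumulate
# gradients only over the micro-batches that finish within a time budget, then
# apply a single update from whatever finished. Assumes PyTorch.
import time
import torch
import torch.nn as nn

model = nn.Linear(32, 10)                       # toy model for illustration
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def non_blocking_step(batch_x, batch_y, micro_batches=4, time_budget=0.05):
    """Accumulate gradients over the micro-batches that finish within the budget."""
    opt.zero_grad()
    xs = batch_x.chunk(micro_batches)
    ys = batch_y.chunk(micro_batches)
    start, finished = time.time(), 0
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x), y)
        (loss / micro_batches).backward()        # gradients accumulate in .grad
        finished += 1
        if time.time() - start > time_budget:    # illustrative straggler cutoff
            break
    if finished:                                 # update from the finished work only
        opt.step()
    return finished

x = torch.randn(64, 32)
y = torch.randint(0, 10, (64,))
print("micro-batches used this step:", non_blocking_step(x, y))
```

In the distributed setting described in the paper, the same accumulation would feed a collective synchronization step (e.g. Ring All-Reduce), so a learner contributes the gradient it has finished rather than blocking the group on its slowest micro-batches.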