Recent years have witnessed a growing list of systems for distributed data-parallel training. Existing systems largely fit into two paradigms, i.e., parameter server and MPI-style collective operations. On the algorithmic side, researchers have proposed a wide range of techniques to lower communication via system relaxations: quantization, decentralization, and communication delay. However, most, if not all, existing systems rely only on standard synchronous and asynchronous stochastic gradient (SG) based optimization, and therefore cannot take advantage of all the optimizations that the machine learning community has been developing recently. Given this emerging gap between the current landscapes of systems and theory, we build BAGUA, a communication framework whose design goal is to provide a system abstraction that is both flexible and modular enough to support state-of-the-art system relaxation techniques for distributed training. Powered by this new system design, BAGUA can readily implement and extend a variety of state-of-the-art distributed learning algorithms. On a production cluster with up to 16 machines (128 GPUs), BAGUA outperforms PyTorch-DDP, Horovod, and BytePS in end-to-end training time by a significant margin (up to 1.95 times) across a diverse range of tasks. Moreover, we conduct a rigorous tradeoff exploration showing that different algorithms and system relaxations achieve the best performance under different network conditions.
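To make the notion of a communication-lowering system relaxation concrete, the sketch below illustrates one of the techniques named above, 1-bit (sign) gradient quantization, in plain PyTorch. This is an illustrative sketch only, not BAGUA's implementation; the `quantize`/`dequantize` helper names are assumptions introduced here for exposition.

```python
# Illustrative sketch (NOT BAGUA's implementation): 1-bit sign quantization,
# one example of the communication-compression relaxations mentioned above.
# Each gradient tensor is compressed to its signs plus a per-tensor scale
# before being exchanged, reducing communication volume roughly 32x for fp32.
import torch


def quantize(grad: torch.Tensor):
    """Compress a gradient tensor into (sign bits, per-tensor scale)."""
    scale = grad.abs().mean()   # scale chosen to preserve average magnitude
    signs = torch.sign(grad)    # +1 / -1 (0 for exact zeros)
    return signs, scale


def dequantize(signs: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct an approximate gradient from signs and scale."""
    return signs * scale


if __name__ == "__main__":
    g = torch.randn(4, 4)
    signs, scale = quantize(g)
    g_hat = dequantize(signs, scale)
    # Approximation error introduced by the relaxation; the algorithmic work
    # surveyed in the abstract studies when training still converges under it.
    print((g - g_hat).abs().mean())
```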