Recent years have witnessed a growing list of systems for distributed data-parallel training. Existing systems largely fit into two paradigms, i.e., parameter server and MPI-style collective operations. On the algorithmic side, researchers have proposed a wide range of techniques to reduce communication via system relaxations: quantization, decentralization, and communication delay. However, most, if not all, existing systems rely only on standard synchronous and asynchronous stochastic gradient (SG) based optimization, and therefore cannot take advantage of all the optimizations that the machine learning community has been developing recently. Given this emerging gap between the current landscapes of systems and theory, we build BAGUA, an MPI-style communication library that provides a flexible and modular collection of primitives to support state-of-the-art system relaxation techniques for distributed training. Powered by this design, BAGUA can readily implement and extend a variety of state-of-the-art distributed learning algorithms. In a production cluster with up to 16 machines (128 GPUs), BAGUA outperforms PyTorch-DDP, Horovod, and BytePS in end-to-end training time by a significant margin (up to 2x) across a diverse range of tasks. Moreover, we conduct a rigorous tradeoff exploration showing that different algorithms and system relaxations achieve the best performance under different network conditions.
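To make the quantization-style system relaxation mentioned above concrete, the following is a minimal, illustrative sketch (not BAGUA's actual API) of uniform 8-bit gradient compression of the kind a quantized allreduce can apply before communication; the function names and the simple min-max scheme are assumptions chosen for clarity.

```python
# Illustrative sketch only: uniform 8-bit quantization of a gradient tensor,
# cutting the bytes placed on the wire by ~4x relative to float32.
import numpy as np

def quantize_8bit(grad: np.ndarray):
    """Map a float32 gradient to uint8 codes plus (scale, min) metadata."""
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / 255.0 if g_max > g_min else 1.0
    codes = np.round((grad - g_min) / scale).astype(np.uint8)
    return codes, scale, g_min

def dequantize_8bit(codes: np.ndarray, scale: float, g_min: float):
    """Approximately reconstruct the original gradient on the receiving side."""
    return codes.astype(np.float32) * scale + g_min

if __name__ == "__main__":
    grad = np.random.randn(1_000_000).astype(np.float32)
    codes, scale, g_min = quantize_8bit(grad)        # what would be communicated
    recovered = dequantize_8bit(codes, scale, g_min)  # what the receiver averages
    print("bytes on the wire:", codes.nbytes, "vs float32:", grad.nbytes)
    print("max abs reconstruction error:", np.abs(recovered - grad).max())
```

In a full data-parallel setup, each worker would quantize its local gradient, exchange the compact codes via a collective operation, and dequantize before (or while) averaging; decentralization and communication delay trade accuracy or staleness for bandwidth in an analogous way.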