Huge neural network models have shown unprecedented performance in real-world applications. However, due to memory constraints, model parallelism must be utilized to host large models that would otherwise not fit into the memory of a single device. Previous methods such as Megatron partition the parameters of the entire model among multiple devices, but each device must still hold redundant copies of the activations during the forward and backward passes. In this work, we propose Optimus, a highly efficient and scalable 2D-partition paradigm of model parallelism that facilitates the training of arbitrarily large language models. In Optimus, activations are also partitioned and distributed among devices, further reducing redundancy. In terms of isoefficiency, Optimus significantly outperforms Megatron. On 64 GPUs of TACC Frontera, Optimus achieves a 1.48X speedup for training, a 1.78X speedup for inference, and an 8X increase in maximum batch size over Megatron. Optimus surpasses Megatron in scaling efficiency by a wide margin. The code is available at https://github.com/xuqifan897/Optimus.
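To illustrate the idea behind a 2D partition of model parallelism, here is a minimal NumPy sketch that simulates a SUMMA-style blocked matrix multiply on a q x q logical device grid. This is an illustrative toy, not the Optimus implementation: the `summa_2d` function and the single-process loop standing in for per-device communication are assumptions for exposition. Each logical device (i, j) owns one block of A, B, and C; in step k the k-th block column of A and block row of B would be broadcast along rows and columns of the grid, and every device accumulates a local block product, so no device ever holds a full activation matrix.

```python
import numpy as np

def summa_2d(A, B, q):
    """Simulate a 2D (SUMMA-style) partition of C = A @ B on a q x q grid.

    Each simulated device (i, j) stores one b x b block of A, B, and C,
    where b = n / q. This sketch runs all 'devices' in one process; in a
    real distributed setting the inner products would follow row and
    column broadcasts of the relevant blocks.
    """
    n = A.shape[0]
    assert n % q == 0, "sketch assumes the matrix dimension divides the grid size"
    b = n // q

    # Partition the inputs into q x q blocks, one block per simulated device.
    Ab = [[A[i*b:(i+1)*b, j*b:(j+1)*b] for j in range(q)] for i in range(q)]
    Bb = [[B[i*b:(i+1)*b, j*b:(j+1)*b] for j in range(q)] for i in range(q)]
    Cb = [[np.zeros((b, b)) for _ in range(q)] for _ in range(q)]

    # Step k: device (i, j) receives A-block (i, k) via a row broadcast and
    # B-block (k, j) via a column broadcast, then accumulates their product.
    for k in range(q):
        for i in range(q):
            for j in range(q):
                Cb[i][j] += Ab[i][k] @ Bb[k][j]

    # Reassemble the distributed result for verification.
    return np.block(Cb)
```

Because each device stores only 1/q^2 of every matrix, both parameter and activation memory per device shrink as the grid grows, which is the property the abstract contrasts with Megatron's 1D scheme.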