Data parallelism is effective at speeding up training, but when the memory of a single device cannot hold the whole model, data parallelism alone can do nothing. An alternative is to split the model by operator, i.e., horizontally. Megatron-LM introduced a 1-dimensional distributed method that uses multiple GPUs to accelerate training, and Optimus is a 2-dimensional solution for distributed tensor parallelism. However, these methods suffer from high communication overhead and low scaling efficiency on large-scale computing clusters. To address this problem, we investigate 2.5-dimensional distributed tensor parallelism. Introduced by Solomonik et al., 2.5-dimensional matrix multiplication performs multiple instances of Cannon's algorithm concurrently to improve efficiency. Because Cannon's algorithm imposes many restrictions and requires a large number of shift operations, we devise a new 2.5-dimensional matrix multiplication method to enhance performance. Combining ideas from both SUMMA and 2.5-dimensional matrix multiplication, we introduce SUMMA2.5-LM for language models to eliminate the unnecessary transmission cost that grows with the degree of model parallelism. Compared with previous 1D and 2D model parallelization of language models, SUMMA2.5-LM reduces the transmission cost of each layer, achieving 1.45x efficiency according to our weak-scaling results comparing a 2.5-D [4,4,4] arrangement with a 2-D [8,8,1] arrangement.
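The SUMMA scheme this work builds on can be illustrated with a serial simulation: the matrices are partitioned over a p x p process grid, and at each of p steps a block-row of A and a block-column of B are "broadcast" and multiplied into the local block of C. This is a minimal sketch for intuition only, not the distributed SUMMA2.5-LM implementation; the function name and the assumption that p divides the matrix dimension are ours.

```python
import numpy as np

def summa_simulated(A, B, p):
    """Serial simulation of SUMMA on a p x p process grid.

    At step k, process (i, j) would receive block A[i][k] broadcast
    along its row and block B[k][j] broadcast along its column, then
    accumulate their product into its local block C[i][j].
    Assumes square matrices whose dimension is divisible by p.
    """
    n = A.shape[0]
    b = n // p  # side length of each local block
    C = np.zeros((n, n))
    for k in range(p):           # one broadcast round per grid step
        for i in range(p):       # loops over (i, j) stand in for the
            for j in range(p):   # p*p processes working in parallel
                C[i*b:(i+1)*b, j*b:(j+1)*b] += (
                    A[i*b:(i+1)*b, k*b:(k+1)*b]
                    @ B[k*b:(k+1)*b, j*b:(j+1)*b]
                )
    return C
```

In the 2.5-dimensional variant, a third grid axis of depth d replicates the input blocks so that the p*p*d processes split the k-loop among the d layers, trading memory for fewer communication rounds.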