Scaling model parameters usually improves model quality, but at the price of high computation overhead. Sparsely activated models, usually in the form of the Mixture of Experts (MoE) architecture, keep computation cost nearly constant relative to their dense counterparts, thus providing opportunities to train and serve a large model at a reasonable cost. However, distributed training of an MoE model is prone to low efficiency, mainly due to the all-to-all communication interleaved with model computation. This paper makes three main contributions. First, we systematically analyze the all-to-all overhead in distributed MoE training. Second, we propose a new communication scheduling scheme based on tensor partitioning that prioritizes all-to-all operations over other communication, owing to their blocking nature. Third, we introduce expert packing, which reduces the all-to-all transfer size, and incorporate optimizations to mitigate its overheads. Both techniques effectively tackle the all-to-all bottleneck, and we integrate them into a new system called Lina. Experiments on an A100 GPU testbed show that Lina improves the training step time of popular NLP models by up to 1.73x over the state-of-the-art.
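To make the bottleneck concrete, the following is a minimal sketch (not Lina's implementation) of the all-to-all exchange an MoE layer performs to dispatch tokens to experts hosted on other ranks; it assumes uniform routing and a standard PyTorch distributed setup launched with `torchrun`. The hypothetical function and tensor shapes are illustrative only.

```python
# Sketch of MoE token dispatch via all-to-all; run with e.g.
#   torchrun --nproc_per_node=2 moe_all_to_all_sketch.py
import torch
import torch.distributed as dist


def moe_dispatch(local_tokens: torch.Tensor) -> torch.Tensor:
    """Exchange equal-sized token chunks with every rank (uniform routing assumed)."""
    # Tokens are laid out as [world_size * capacity, hidden]; each contiguous
    # chunk is destined for the expert hosted on the corresponding rank.
    recv = torch.empty_like(local_tokens)
    # Blocking all-to-all: the expert computation on this rank cannot start
    # until every peer's chunk has arrived, which is why all-to-all sits on
    # the critical path of each MoE layer.
    dist.all_to_all_single(recv, local_tokens)
    return recv


def main():
    dist.init_process_group(backend="gloo")  # use "nccl" on GPUs
    torch.manual_seed(dist.get_rank())
    capacity, hidden = 4, 8
    tokens = torch.randn(dist.get_world_size() * capacity, hidden)
    received = moe_dispatch(tokens)
    # `received` holds the tokens this rank's expert should process; a second
    # all-to-all (the combine step) would return the results afterwards.
    print(f"rank {dist.get_rank()} received {tuple(received.shape)} tokens")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Because this exchange occurs twice per MoE layer (dispatch and combine) in both the forward and backward passes, any delay in it stalls computation directly, which is why prioritizing it over other, non-blocking communication can shorten the training step.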