The size of Transformer models is growing at an unprecedented pace. It took less than one year after the release of GPT-3 (175B) to reach trillion-parameter scale. Training such models requires both substantial engineering effort and enormous computing resources, which are luxuries most research teams cannot afford. In this paper, we propose PipeTransformer, which leverages automated and elastic pipelining and data parallelism for efficient distributed training of Transformer models. PipeTransformer automatically adjusts the pipelining and data parallelism by identifying and freezing some layers during training, and instead allocates resources to training the remaining active layers. More specifically, PipeTransformer dynamically excludes converged layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase the data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on the GLUE and SQuAD datasets. Our results show that PipeTransformer attains a 2.4× speedup over the state-of-the-art baseline. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design. Finally, we develop open-source flexible APIs for PipeTransformer, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing it to be applied to other algorithms that require similar freezing strategies.
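To make the interplay between the freeze algorithm and elastic resource allocation concrete, the following minimal Python sketch illustrates the idea under stated assumptions: `FreezeAlgorithm`, `ElasticPipeline`, the linear freezing schedule, and all parameter names are hypothetical stand-ins for illustration only, not the actual PipeTransformer API.

```python
# Hypothetical sketch of the freeze/pipeline separation described in the abstract.
# All names and the simple freezing schedule are illustrative assumptions,
# not the real PipeTransformer implementation.

from dataclasses import dataclass


@dataclass
class FreezeAlgorithm:
    """Decides how many leading layers are frozen at each epoch.

    A real freeze algorithm would use training signals (e.g., per-layer
    gradient norms); here a simple linear schedule stands in.
    """
    num_layers: int
    alpha: float = 0.25  # fraction of remaining active layers to freeze per epoch

    def num_frozen(self, prev_frozen: int) -> int:
        frozen = prev_frozen + int(self.alpha * (self.num_layers - prev_frozen))
        return min(frozen, self.num_layers - 1)  # keep at least one active layer


class ElasticPipeline:
    """Toy stand-in for the elastic pipeline: packs active layers into as few
    pipeline stages as possible and forks data-parallel replicas on freed GPUs."""

    def __init__(self, total_gpus: int, layers_per_gpu: int):
        self.total_gpus = total_gpus
        self.layers_per_gpu = layers_per_gpu

    def repartition(self, num_active_layers: int):
        # Fewer active layers -> narrower pipeline -> more data-parallel replicas.
        pipeline_width = max(1, -(-num_active_layers // self.layers_per_gpu))  # ceil
        dp_replicas = max(1, self.total_gpus // pipeline_width)
        return pipeline_width, dp_replicas


def train(num_layers: int = 24, total_gpus: int = 8, epochs: int = 5) -> None:
    freeze = FreezeAlgorithm(num_layers)
    pipe = ElasticPipeline(total_gpus, layers_per_gpu=3)
    frozen = 0
    for epoch in range(epochs):
        frozen = freeze.num_frozen(frozen)
        width, replicas = pipe.repartition(num_layers - frozen)
        print(f"epoch {epoch}: frozen={frozen:2d} "
              f"pipeline_gpus={width} data_parallel_replicas={replicas}")
        # ... forward/backward over the active layers would run here ...


if __name__ == "__main__":
    train()
```

The sketch only shows the control flow: as more layers converge and are frozen, the pipeline shrinks and the freed GPUs host additional data-parallel replicas, which is the source of the reported speedup.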