Transformer-based neural models are used in many AI applications. Training these models is expensive, as it requires substantial GPU resources over long periods of time. It is also challenging because typical training data, such as sentences, have variable lengths, and the computation patterns of the Transformer are more complex than those of convolutional neural networks. Existing systems either focus only on model inference or optimize only BERT-like encoder models. In this paper, we present LightSeq2, a system that accelerates training for a general family of Transformer models on GPUs. We propose a series of GPU optimization techniques tailored to the specific computation flow and memory access patterns of Transformer models. LightSeq2 supports many model architectures, including BERT (encoder-only), GPT (decoder-only), Transformer (encoder-decoder), and vision Transformer. Our experiments on a variety of models and benchmarks show that LightSeq2 is consistently faster (1.4-3.5x) than previous systems on different GPUs. In particular, it achieves a 308% training speedup over existing systems on a large public machine translation benchmark (WMT14 English-German).