Transformer-based models have proven powerful in many natural language, computer vision, and speech recognition applications. Training these models is expensive due to variable input lengths, complex computation, and large numbers of parameters. Existing systems either focus only on efficient inference or optimize only BERT-like encoder models. In this paper, we present LightSeq, a system for the efficient training of Transformer-based models on GPUs. We propose a series of GPU optimization techniques tailored to the computation flow and memory access patterns of neural layers in Transformers. LightSeq supports a variety of network architectures, including BERT (encoder-only), GPT (decoder-only), and Transformer (encoder-decoder). Our experiments on GPUs with varying models and datasets show that LightSeq is 1.4-3.5x faster than previous systems. In particular, it achieves a 308% training speedup over existing systems on a large public machine translation benchmark (WMT14 English-German).