Transformer-based models have proven powerful in many natural language processing, computer vision, and speech recognition applications. Training these models is expensive due to variable input lengths, complex computation, and large numbers of parameters. Existing systems either focus only on efficient inference or optimize only BERT-like encoder models. In this paper, we present LightSeq2, a system for the efficient training of Transformer-based models on GPUs. We propose a series of GPU optimization techniques tailored to the computation flow and memory access patterns of neural layers in Transformers. LightSeq2 supports a variety of network architectures, including BERT (encoder-only), GPT (decoder-only), and Transformer (encoder-decoder). Our experiments on GPUs with varying models and datasets show that LightSeq2 is 1.4-3.5x faster than previous systems. In particular, it achieves a 308% training speedup over existing systems on a large public machine translation benchmark (WMT14 English-German).