We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
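To make the "mixing" concrete, below is a minimal sketch of the kind of parameter-free Fourier mixing sublayer the abstract refers to: a 2D DFT over the (sequence length, hidden dimension) embedding input — one 1D DFT along the hidden dimension, one along the sequence dimension — keeping only the real part. The function name, shapes, and example values are illustrative assumptions, not taken from the released FNet code; the full encoder block additionally includes residual connections, layer normalization, and feed-forward sublayers.

```python
import jax
import jax.numpy as jnp


def fourier_mixing_sublayer(x: jnp.ndarray) -> jnp.ndarray:
    """Parameter-free token mixing: 2D DFT over (seq, hidden), real part kept.

    Args:
      x: token embeddings of shape (batch, seq_len, hidden_dim).
    Returns:
      The real part of a 1D DFT applied along the hidden dimension followed
      by a 1D DFT applied along the sequence dimension; output shape matches
      the input, so it is a drop-in replacement for a self-attention sublayer.
    """
    return jnp.fft.fft(jnp.fft.fft(x, axis=-1), axis=-2).real


# Example usage (hypothetical sizes): mix a batch of random "token" embeddings.
x = jax.random.normal(jax.random.PRNGKey(0), (8, 512, 768))
y = fourier_mixing_sublayer(x)  # same shape as x: (8, 512, 768)
```

Because this mixing step carries no learnable parameters, it is both faster and lighter in memory than self-attention, which is consistent with the speed and footprint gains reported above.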