We show that Transformer encoder architectures can be massively sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that "mix" input tokens. These linear transformations, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains nearly seven times faster on GPUs and twice as fast on TPUs. The resulting model, FNet, also scales very efficiently to long inputs. Specifically, when compared to the "efficient" Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, but is faster than the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes: for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts.
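To make the token-mixing idea concrete, below is a minimal sketch in JAX of an FNet-style encoder block. It assumes the mixing sublayer is a 2D discrete Fourier Transform applied over the sequence and hidden dimensions, with only the real part retained, followed by a standard position-wise feed-forward sublayer; the residual-plus-layer-norm arrangement, the GELU activation, the plain-array parameters, and the helper names (`fourier_mixing`, `fnet_encoder_block`, `layer_norm`) are all illustrative choices for this sketch, not the paper's implementation.

```python
import jax
import jax.numpy as jnp


def layer_norm(x, eps=1e-12):
    # Parameter-free layer normalization over the hidden dimension
    # (the real model would use learned scale/shift parameters).
    mean = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mean) / jnp.sqrt(var + eps)


def fourier_mixing(x):
    # Unparameterized token mixing: 2D DFT over the last two axes
    # (sequence and hidden), keeping only the real part. Output has
    # the same shape as the input and involves no learned weights.
    return jnp.fft.fft2(x).real


def fnet_encoder_block(x, ff_w1, ff_b1, ff_w2, ff_b2):
    # Mixing sublayer replaces self-attention; residual + layer norm.
    h = layer_norm(x + fourier_mixing(x))
    # Standard feed-forward sublayer with a GELU nonlinearity.
    ff = jnp.dot(jax.nn.gelu(jnp.dot(h, ff_w1) + ff_b1), ff_w2) + ff_b2
    return layer_norm(h + ff)


# Usage sketch with arbitrary shapes: 128 tokens, model width 256,
# feed-forward width 1024.
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 256))
w1 = jax.random.normal(key, (256, 1024)) * 0.02
b1 = jnp.zeros(1024)
w2 = jax.random.normal(key, (1024, 256)) * 0.02
b2 = jnp.zeros(256)
y = fnet_encoder_block(x, w1, b1, w2, b2)  # same shape as x
```

Because the mixing step has no parameters and is computed with the FFT, its cost grows as O(N log N) in the sequence length, which is the source of the speed and memory advantages over quadratic self-attention claimed above.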