Transformer is a powerful model for text understanding. However, it is inefficient because its complexity is quadratic in the input sequence length. Although there are many methods for Transformer acceleration, they are still either inefficient on long sequences or not effective enough. In this paper, we propose Fastformer, an efficient Transformer model based on additive attention. In Fastformer, instead of modeling the pair-wise interactions between tokens, we first use an additive attention mechanism to model global contexts, and then further transform each token representation based on its interaction with the global context representations. In this way, Fastformer can achieve effective context modeling with linear complexity. Extensive experiments on five datasets show that Fastformer is much more efficient than many existing Transformer models and can meanwhile achieve comparable or even better long text modeling performance.
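To make the described mechanism concrete, below is a minimal single-head sketch of additive attention for global context modeling with linear complexity, written in PyTorch. The module name, parameter names (w_query, w_key), and the exact interaction and residual details are illustrative assumptions rather than the paper's reference implementation.

```python
# A minimal, single-head sketch of linear-complexity additive attention.
# Names and details are assumptions for illustration, not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttentionSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        # Learnable vectors that score each position for additive attention.
        self.w_query = nn.Parameter(torch.randn(dim) / dim ** 0.5)
        self.w_key = nn.Parameter(torch.randn(dim) / dim ** 0.5)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                                  # x: (batch, seq_len, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # Global query: additive attention pools all query vectors in O(N).
        alpha = F.softmax(q @ self.w_query / q.size(-1) ** 0.5, dim=1)
        global_q = torch.einsum('bn,bnd->bd', alpha, q)    # (batch, dim)

        # Interact each key with the global query (element-wise product),
        # then pool again with additive attention to get a global key.
        p = k * global_q.unsqueeze(1)
        beta = F.softmax(p @ self.w_key / p.size(-1) ** 0.5, dim=1)
        global_k = torch.einsum('bn,bnd->bd', beta, p)     # (batch, dim)

        # Transform each value by its interaction with the global key,
        # then project and add a residual connection to the queries.
        u = v * global_k.unsqueeze(1)
        return self.out(u) + q

# Usage: a batch of 2 sequences, 128 tokens, hidden size 64.
layer = AdditiveAttentionSketch(64)
out = layer(torch.randn(2, 128, 64))
print(out.shape)  # torch.Size([2, 128, 64])
```

Note that every step operates on the sequence only through per-token element-wise products and a single weighted sum, so both time and memory scale linearly with sequence length, in contrast to the quadratic token-token attention map of the standard Transformer.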