Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. Our analysis shows that RFA's efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints.
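To make the core idea concrete, below is a minimal NumPy sketch of random-feature attention: a random feature map phi is chosen so that softmax(QK^T)V is approximated by phi(Q)(phi(K)^T V), which can be computed in time and memory linear in the sequence length. The feature map, scaling, and names (`random_feature_map`, `rfa_attention`, `num_features`) are illustrative assumptions for this sketch, not the paper's exact formulation, which also includes the optional gating mechanism and a recurrent decoding form.

```python
import numpy as np

def random_feature_map(x, W):
    # Trigonometric random features: E[phi(x) . phi(y)] approximates the
    # Gaussian kernel exp(-||x - y||^2 / 2) for Gaussian-distributed rows of W.
    proj = x @ W.T                                      # (n, D)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1) / np.sqrt(W.shape[0])

def rfa_attention(Q, K, V, num_features=64, seed=0):
    """Linear time/space attention sketch (assumed interface, not the paper's code):
    softmax(QK^T)V is approximated by phi(Q) (phi(K)^T V) / (phi(Q) sum_i phi(k_i))."""
    d = Q.shape[-1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((num_features, d))          # random projection matrix
    # Split the usual 1/sqrt(d) dot-product scaling between queries and keys.
    phi_q = random_feature_map(Q / d ** 0.25, W)        # (n, 2D)
    phi_k = random_feature_map(K / d ** 0.25, W)        # (n, 2D)
    # Key/value summaries are computed once, so cost is O(n * D * d_v), not O(n^2).
    kv = phi_k.T @ V                                    # (2D, d_v)
    z = phi_k.sum(axis=0)                               # (2D,)
    return (phi_q @ kv) / (phi_q @ z)[:, None]          # (n, d_v)
```

Because the key/value summaries `kv` and `z` are fixed-size regardless of how many tokens have been seen, they can also be updated incrementally during decoding, which is where the constant-memory, faster-decoding behavior described above comes from.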