Transformer models have achieved superior performance on various natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits their practicality for long sequences. Existing attention variants improve computational efficiency, but their ability to effectively compute global information is limited. In parallel to Transformer models, state space models (SSMs) are tailored to long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for $\underline{\textbf{S}}$tate s$\underline{\textbf{P}}$ace $\underline{\textbf{A}}$ugmente$\underline{\textbf{D}}$ Transform$\underline{\textbf{E}}$r. Specifically, we augment an SSM into the bottom layer of SPADE, and we employ efficient local attention methods for the other layers. The SSM provides global information, which compensates for the lack of long-range dependencies in local attention methods. Experimental results on the Long Range Arena benchmark and language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
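To make the layer layout concrete, below is a minimal sketch in PyTorch of the architecture the abstract describes: an SSM block at the bottom layer for global mixing, with efficient local (windowed) attention in the remaining layers. The classes `SimpleSSM`, `LocalAttentionLayer`, and `SPADESketch`, and parameters such as `window_size` and `d_state`, are illustrative assumptions for this sketch, not the authors' actual implementation or API.

```python
import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Toy diagonal state space layer: h_t = A * h_{t-1} + B x_t, y_t = C h_t."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        self.A = nn.Parameter(-torch.rand(d_state))                       # negative poles -> stable decay
        self.B = nn.Parameter(torch.randn(d_model, d_state) / d_state**0.5)
        self.C = nn.Parameter(torch.randn(d_state, d_model) / d_state**0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:                   # x: (batch, length, d_model)
        decay = torch.exp(self.A)                                          # (d_state,), values in (0, 1]
        h = torch.zeros(x.size(0), self.A.numel(), device=x.device)
        outputs = []
        for t in range(x.size(1)):                                         # sequential O(L) scan
            h = decay * h + x[:, t] @ self.B
            outputs.append(h @ self.C)
        return torch.stack(outputs, dim=1)


class LocalAttentionLayer(nn.Module):
    """Transformer layer whose attention is restricted to a local window."""

    def __init__(self, d_model: int, n_heads: int, window_size: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.window_size = window_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        L = x.size(1)
        idx = torch.arange(L, device=x.device)
        # boolean mask: True = position pair is farther apart than the window and cannot attend
        mask = (idx[None, :] - idx[:, None]).abs() > self.window_size
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return self.norm(x + out)


class SPADESketch(nn.Module):
    """Bottom layer: SSM for global information; remaining layers: local attention."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 4, window_size: int = 8):
        super().__init__()
        self.bottom_ssm = SimpleSSM(d_model)
        self.local_layers = nn.ModuleList(
            LocalAttentionLayer(d_model, n_heads, window_size) for _ in range(n_layers - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.bottom_ssm(x)          # global mixing injected once, at the bottom
        for layer in self.local_layers:
            x = layer(x)                    # cheap local mixing in the upper layers
        return x
```

The sequential scan in `SimpleSSM` is written for clarity; practical SSM layers compute the same recurrence with a parallel scan or a global convolution so that the bottom layer stays efficient on long sequences.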