Transformer models have achieved state-of-the-art results in a wide range of NLP tasks, including summarization. However, training and inference with large transformer models can be computationally expensive. Previous work has focused on one important bottleneck: the quadratic self-attention mechanism in the encoder. Modified encoder architectures such as LED or LoBART use local attention patterns to address this problem for summarization. In contrast, this work focuses on the transformer's encoder-decoder attention mechanism. The cost of this attention becomes more significant during inference, and in training approaches that require model-generated histories. First, we examine the complexity of the encoder-decoder attention. We demonstrate empirically that there is a sparse sentence structure in document summarization that can be exploited by constraining the attention mechanism to a subset of input sentences, whilst maintaining system performance. Second, we propose a modified architecture that selects the subset of sentences to constrain the encoder-decoder attention. Experiments are carried out on abstractive summarization datasets, including CNN/DailyMail, XSum, Spotify Podcast, and arXiv.
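To make the core idea concrete, the sketch below shows one way to constrain encoder-decoder (cross-) attention so that decoder queries can only attend to encoder positions belonging to a selected subset of input sentences. This is an illustrative masking formulation, not the paper's implementation; the names `sentence_ids` and `selected_sentences` are hypothetical.

```python
import torch
import torch.nn.functional as F

def constrained_cross_attention(query, key, value, sentence_ids, selected_sentences):
    """Cross-attention restricted to a subset of input sentences.

    query:  (batch, tgt_len, d)  decoder states
    key:    (batch, src_len, d)  encoder states
    value:  (batch, src_len, d)  encoder states
    sentence_ids:       (batch, src_len)  sentence index of each source token
    selected_sentences: (batch, n_sel)    indices of the retained sentences
    """
    d = query.size(-1)
    # Standard scaled dot-product scores over all source positions.
    scores = torch.matmul(query, key.transpose(-2, -1)) / d ** 0.5  # (batch, tgt_len, src_len)

    # Boolean mask: True where a source token belongs to a selected sentence.
    keep = (sentence_ids.unsqueeze(-1) == selected_sentences.unsqueeze(1)).any(-1)  # (batch, src_len)

    # Mask out all tokens from non-selected sentences before the softmax.
    scores = scores.masked_fill(~keep.unsqueeze(1), float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, value)
```

Note that masking only illustrates the constraint; an actual efficiency gain would come from gathering the keys and values of the selected sentences and dropping the rest before computing attention, so that the score matrix itself shrinks.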