One of the challenges for current sequence-to-sequence (seq2seq) models is processing long sequences, such as those found in summarization and document-level machine translation tasks. These tasks require the model to reason at the token level as well as at the sentence and paragraph level. We design and study a new Hierarchical Attention Transformer-based architecture (HAT) that outperforms standard Transformers on several sequence-to-sequence tasks. Furthermore, our model achieves state-of-the-art ROUGE scores on several summarization tasks, including PubMed, arXiv, CNN/DM, SAMSum, and AMI. Our model also outperforms a document-level machine translation baseline on the WMT20 English-to-German translation task. We investigate what the hierarchical layers learn by visualizing the hierarchical encoder-decoder attention. Finally, we study hierarchical learning for encoder-only pre-training and analyze its performance on classification tasks.
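To make the idea of hierarchical attention concrete, the following is a minimal PyTorch sketch of an encoder that attends at the token level and then at the sentence level. It is not the paper's exact HAT architecture: the class name `HierarchicalEncoder`, the use of sentence-start indices for pooling, and all hyperparameters are assumptions for illustration only.

```python
# Minimal, illustrative sketch of hierarchical (token- and sentence-level) attention.
# NOT the exact HAT architecture; names and pooling scheme are assumptions.
import torch
import torch.nn as nn


class HierarchicalEncoder(nn.Module):
    """Token-level Transformer encoder followed by a sentence-level attention layer."""

    def __init__(self, vocab_size=32000, d_model=512, nhead=8,
                 num_token_layers=6, num_sent_layers=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        token_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.token_encoder = nn.TransformerEncoder(token_layer, num_token_layers)
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_sent_layers)

    def forward(self, token_ids, sent_boundaries):
        # token_ids: (batch, seq_len); sent_boundaries: (batch, num_sents),
        # each entry the index of a sentence's first token (e.g. a BOS token).
        token_states = self.token_encoder(self.embed(token_ids))
        # Gather one representation per sentence and apply sentence-level attention,
        # so information flows between sentences in a single attention step.
        idx = sent_boundaries.unsqueeze(-1).expand(-1, -1, token_states.size(-1))
        sent_states = self.sent_encoder(torch.gather(token_states, 1, idx))
        # A decoder could then cross-attend to both token- and sentence-level states.
        return token_states, sent_states


# Toy usage: 2 documents, 12 tokens each, sentences starting at tokens 0, 4, 8.
enc = HierarchicalEncoder()
tokens = torch.randint(0, 32000, (2, 12))
boundaries = torch.tensor([[0, 4, 8], [0, 4, 8]])
tok_out, sent_out = enc(tokens, boundaries)
print(tok_out.shape, sent_out.shape)  # (2, 12, 512) and (2, 3, 512)
```

The design intuition this sketch captures is that the sentence-level layer gives the model a short path for reasoning across a long document, while the token-level layers preserve fine-grained information for generation.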