Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graph-labeled inputs.
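For concreteness, the following is a minimal single-head NumPy sketch of self-attention with relative position representations as the abstract describes: each pair of positions (i, j) contributes learned embeddings for the clipped distance j − i to both the attention logits and the output values. All names here (relative_self_attention, rel_k, rel_v, max_dist) are illustrative assumptions, not identifiers from the authors' code.

```python
import numpy as np

def relative_self_attention(x, Wq, Wk, Wv, rel_k, rel_v, max_dist):
    """Single-head self-attention with relative position representations.

    x:          (n, d)   input sequence
    Wq, Wk, Wv: (d, dz)  query/key/value projections
    rel_k:      (2*max_dist+1, dz) relative-position embeddings for keys
    rel_v:      (2*max_dist+1, dz) relative-position embeddings for values
    """
    n, _ = x.shape
    dz = Wq.shape[1]
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    # Relative distance j - i for every pair, clipped to [-max_dist, max_dist]
    # and shifted to [0, 2*max_dist] so it can index the embedding tables.
    idx = np.clip(np.arange(n)[None, :] - np.arange(n)[:, None],
                  -max_dist, max_dist) + max_dist
    a_k, a_v = rel_k[idx], rel_v[idx]           # each (n, n, dz)

    # Logits: q_i . (k_j + a_k[i, j]) / sqrt(dz)
    logits = (q @ k.T + np.einsum('id,ijd->ij', q, a_k)) / np.sqrt(dz)

    # Softmax over positions j (numerically stabilized).
    alpha = np.exp(logits - logits.max(axis=-1, keepdims=True))
    alpha /= alpha.sum(axis=-1, keepdims=True)

    # Output: sum_j alpha[i, j] * (v_j + a_v[i, j])
    return alpha @ v + np.einsum('ij,ijd->id', alpha, a_v)
```

Because distances are clipped to a fixed window, the embedding tables stay small regardless of sequence length, and the extra terms reduce to two batched matrix products on top of standard attention, which is what makes the implementation efficient.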