Despite its original goal to jointly learn to align and translate, prior research suggests that the Transformer captures poor word alignments through its attention mechanism. In this paper, we show that attention weights DO capture accurate word alignments and propose two novel word alignment induction methods, Shift-Att and Shift-AET. The main idea is to induce alignments at the step when the to-be-aligned target token is the decoder input, rather than the decoder output as in previous work. Shift-Att is an interpretation method that induces alignments from the attention weights of Transformer and requires no parameter update or architecture change. Shift-AET extracts alignments from an additional alignment module which is tightly integrated into Transformer and trained separately with supervision from symmetrized Shift-Att alignments. Experiments on three publicly available datasets demonstrate that both methods perform better than their corresponding neural baselines, and Shift-AET significantly outperforms GIZA++ by 1.4-4.8 AER points.
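To make the shifted induction step concrete, below is a minimal illustrative sketch (not the authors' implementation) of how Shift-Att-style alignments could be read off a cross-attention matrix. It assumes teacher forcing with a right-shifted decoder input, so target token y_i is the decoder input at step i+1; the function name and the (source, target) pair convention are hypothetical.

```python
import numpy as np

def shift_att_alignments(attn, tgt_len):
    """Sketch of Shift-Att-style alignment induction (assumptions above).

    attn: (decoder_steps, src_len) matrix of cross-attention weights from one
          decoder layer; row t is the attention distribution when the decoder
          *input* at step t is target token y_{t-1} (right-shifted input).
    tgt_len: number of target tokens to align.
    """
    alignments = set()
    for i in range(tgt_len):
        step = i + 1                     # y_i is the decoder input at step i+1
        if step >= attn.shape[0]:        # no later step available for the last token
            break
        j = int(np.argmax(attn[step]))   # most-attended source position at that step
        alignments.add((j, i))           # (source index, target index) link
    return alignments

# Toy usage: 4 decoder steps over a 3-token source sentence.
attn = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.2, 0.2, 0.6],
                 [0.3, 0.3, 0.4]])
print(shift_att_alignments(attn, tgt_len=3))  # {(1, 0), (2, 1), (2, 2)}
```

In the paper, alignments induced this way from models run in both translation directions are symmetrized before being used as supervision for the Shift-AET alignment module.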