Non-autoregressive machine translation (NAT) has recently made great progress. However, most work to date has focused on standard translation tasks, even though some edit-based NAT models, such as the Levenshtein Transformer (LevT), seem well suited to translating with a Translation Memory (TM). This is the scenario considered here. We first analyze the vanilla LevT model and explain why it does not perform well in this setting. We then propose a new variant, TM-LevT, and show how to train this model effectively. By modifying the data presentation and introducing an extra deletion operation, we obtain performance that is on par with an autoregressive approach, while reducing the decoding load. We also show that incorporating TMs during training dispenses with knowledge distillation, a well-known trick used to mitigate the multimodality issue.
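To make the edit-based setting concrete, the following is a minimal toy sketch (not the authors' TM-LevT implementation) of a Levenshtein-style decoding loop that starts from a TM fuzzy match rather than an empty target. The three policies (delete, insert placeholders, fill tokens) stand in for the learned classifiers of a real model; their names and signatures are illustrative assumptions.

```python
from typing import Callable, List

Tokens = List[str]

def edit_decode(
    tm_match: Tokens,
    delete_policy: Callable[[Tokens], List[bool]],
    placeholder_policy: Callable[[Tokens], List[int]],
    fill_policy: Callable[[Tokens], Tokens],
    max_iters: int = 10,
) -> Tokens:
    """Iteratively refine a TM match with delete / insert / fill edits."""
    hyp = list(tm_match)  # start from the retrieved TM target, not from scratch
    for _ in range(max_iters):
        # 1) delete tokens the policy marks as irrelevant to the source
        keep = delete_policy(hyp)
        hyp = [tok for tok, k in zip(hyp, keep) if k]

        # 2) insert placeholders into the gaps between surviving tokens
        n_ins = placeholder_policy(hyp)  # one count per gap (len(hyp) + 1 gaps)
        with_slots: Tokens = []
        for i, tok in enumerate(hyp):
            with_slots += ["<plh>"] * n_ins[i]
            with_slots.append(tok)
        with_slots += ["<plh>"] * n_ins[len(hyp)]

        # 3) fill the placeholders with actual tokens
        new_hyp = fill_policy(with_slots)
        if new_hyp == hyp:  # no edit changed the hypothesis: converged
            break
        hyp = new_hyp
    return hyp

# Dummy policies for demonstration: keep everything, insert no placeholders,
# and drop any remaining placeholders instead of predicting tokens for them.
out = edit_decode(
    tm_match=["the", "cat", "sits", "."],
    delete_policy=lambda h: [True] * len(h),
    placeholder_policy=lambda h: [0] * (len(h) + 1),
    fill_policy=lambda h: [t for t in h if t != "<plh>"],
)
print(out)  # ['the', 'cat', 'sits', '.']
```

In a trained model, each policy would be a classifier conditioned on the source sentence, and the extra deletion operation mentioned above would give the model an additional chance to discard TM tokens that do not belong in the final translation.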