Multilingual Neural Machine Translation (NMT) enables one model to serve all translation directions, including ones that are unseen during training, i.e., zero-shot translation. Despite being theoretically attractive, current models often produce low-quality translations, commonly failing to even produce outputs in the right target language. In this work, we observe that off-target translation is dominant even in strong multilingual systems trained on massive multilingual corpora. To address this issue, we propose a joint approach that regularizes NMT models at both the representation level and the gradient level. At the representation level, we leverage an auxiliary target language prediction task to regularize decoder outputs so that they retain information about the target language. At the gradient level, we leverage a small amount of direct data (thousands of sentence pairs) to regularize model gradients. Our results demonstrate that our approach is highly effective in both reducing off-target translations and improving zero-shot translation performance, with gains of +5.59 and +10.38 BLEU on the WMT and OPUS datasets respectively. Moreover, experiments show that our method also works well when the small amount of direct data is not available.
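To make the representation-level regularizer concrete, the following is a minimal sketch of an auxiliary target-language prediction loss added on top of the usual translation loss, assuming a Transformer-style decoder whose hidden states are accessible; all class, function, and variable names here are illustrative and not taken from the paper's implementation.

```python
# Minimal sketch of an auxiliary target-language prediction (TLP) regularizer.
# Hypothetical names; assumes decoder hidden states of shape (batch, tgt_len, hidden_dim).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetLanguagePredictor(nn.Module):
    """Predicts the target-language ID from mean-pooled decoder hidden states."""

    def __init__(self, hidden_dim: int, num_languages: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_languages)

    def forward(self, decoder_hidden: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # decoder_hidden: (batch, tgt_len, hidden_dim)
        # pad_mask: (batch, tgt_len), 1.0 for real tokens, 0.0 for padding
        mask = pad_mask.unsqueeze(-1).float()
        pooled = (decoder_hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
        return self.classifier(pooled)  # (batch, num_languages)


def joint_loss(translation_loss: torch.Tensor,
               tlp_logits: torch.Tensor,
               target_lang_ids: torch.Tensor,
               tlp_weight: float = 1.0) -> torch.Tensor:
    """Combine the standard translation loss with the auxiliary TLP loss."""
    tlp_loss = F.cross_entropy(tlp_logits, target_lang_ids)
    return translation_loss + tlp_weight * tlp_loss
```

In this sketch, the classifier's cross-entropy term pushes the decoder states to stay predictive of the intended target language, which is the intuition behind using such a task to reduce off-target outputs; the relative weight of the auxiliary term is a tunable hyperparameter.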