In Neural Machine Translation, it is typically assumed that the sentence with the highest estimated probability should also be the translation with the highest quality as measured by humans. In this work, we question this assumption and show that model estimates and translation quality only vaguely correlate. We apply Minimum Bayes Risk (MBR) decoding on unbiased samples to optimize diverse automated metrics of translation quality as an alternative inference strategy to beam search. Instead of targeting the hypotheses with the highest model probability, MBR decoding extracts the hypotheses with the highest estimated quality. Our experiments show that the combination of a neural translation model with a neural reference-based metric, BLEURT, results in significant improvement in human evaluations. This improvement is obtained with translations different from classical beam-search output: these translations have much lower model likelihood and are less favored by surface metrics like BLEU.
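The MBR decoding procedure described above can be sketched in a few lines: draw unbiased samples from the model, then select the candidate that maximizes its expected utility against all other samples, which act as pseudo-references. This is a minimal illustration, not the paper's implementation; the unigram-F1 `utility` below is a cheap stand-in for a learned metric such as BLEURT, and the sample sentences are invented.

```python
from collections import Counter

def utility(hyp: str, ref: str) -> float:
    """Unigram F1 between hypothesis and reference.

    A toy stand-in for a reference-based quality metric like BLEURT
    (assumption: any symmetric or asymmetric utility can be plugged in here).
    """
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def mbr_decode(samples: list[str]) -> str:
    """Return the sample with the highest expected utility.

    Each sample serves both as a candidate and as a pseudo-reference,
    so no gold reference is needed at inference time.
    """
    best, best_score = samples[0], float("-inf")
    for cand in samples:
        others = [ref for ref in samples if ref is not cand]
        score = sum(utility(cand, ref) for ref in others) / max(len(others), 1)
        if score > best_score:
            best, best_score = cand, score
    return best

# Hypothetical samples drawn from a translation model:
samples = ["the cat sat", "the cat sat down", "a dog ran"]
print(mbr_decode(samples))
```

Note that, unlike beam search, this selection never consults the model's own probabilities: a candidate wins by agreeing with the other samples under the chosen metric, which is exactly why the selected translations can have low model likelihood yet high estimated quality.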