Several neural-based metrics have recently been proposed to evaluate machine translation quality. However, all of them resort to point estimates, which provide limited information at the segment level. This is made worse because they are trained on noisy, biased, and scarce human judgements, often resulting in unreliable quality predictions. In this paper, we introduce uncertainty-aware MT evaluation and analyze the trustworthiness of the predicted quality. We combine the COMET framework with two uncertainty estimation methods, Monte Carlo dropout and deep ensembles, to obtain quality scores along with confidence intervals. We compare the performance of our uncertainty-aware MT evaluation methods across multiple language pairs from the QT21 dataset and the WMT20 metrics task, augmented with MQM annotations. We experiment with varying numbers of references and further discuss the usefulness of uncertainty-aware quality estimation (without references) to flag possibly critical translation mistakes.
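To make the core idea concrete, the following is a minimal sketch of Monte Carlo dropout applied to a segment-level quality regressor. The `ToyQualityEstimator` below is a hypothetical stand-in for a trained COMET-style model (the actual COMET architecture and API are not reproduced here); only the general procedure of keeping dropout active at inference and aggregating repeated stochastic forward passes into a mean score and a confidence interval reflects the method described above.

```python
# Sketch of MC dropout for uncertainty-aware quality scores (illustrative only).
import torch
import torch.nn as nn


class ToyQualityEstimator(nn.Module):
    """Hypothetical stand-in for a sentence-level quality regressor."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 32), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def mc_dropout_score(model: nn.Module, features: torch.Tensor, n_samples: int = 100):
    """Return the mean quality score and a 95% confidence interval per segment."""
    model.eval()
    # Re-enable only the dropout layers, so the stochasticity at inference
    # comes from the dropout masks rather than e.g. batch-norm statistics.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()
    with torch.no_grad():
        samples = torch.stack([model(features) for _ in range(n_samples)])
    mean, std = samples.mean(dim=0), samples.std(dim=0)
    return mean, (mean - 1.96 * std, mean + 1.96 * std)


if __name__ == "__main__":
    model = ToyQualityEstimator()
    segment_features = torch.randn(4, 16)  # 4 hypothetical segment encodings
    mean, (lo, hi) = mc_dropout_score(model, segment_features)
    print(mean, lo, hi)
```

A deep ensemble follows the same aggregation step, except that the score samples come from independently trained models rather than from repeated dropout masks of a single model.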