Automatic metrics are commonly used as the exclusive tool for declaring the superiority of one machine translation system's quality over another. The community's choice of automatic metric guides research directions and industrial developments by deciding which models are deemed better. Evaluating metrics' correlations with sets of human judgements has been limited by the size of these sets. In this paper, we evaluate how reliable metrics are when compared against human judgements on what is, to the best of our knowledge, the largest collection of judgements reported in the literature. Arguably, the pairwise ranking of two systems is the most common evaluation task in research and deployment scenarios. Taking human judgement as a gold standard, we investigate which metrics have the highest accuracy in predicting translation quality rankings for such system pairs. Furthermore, we evaluate the performance of various metrics across different language pairs and domains. Lastly, we show that the sole use of BLEU has impeded the development of improved models, leading to bad deployment decisions. We release the collection of 2.3M sentence-level human judgements for 4380 systems for further analysis and replication of our work.
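To make the evaluation setup concrete, the following is a minimal Python sketch of pairwise ranking accuracy as described above: for every pair of systems, a metric is credited when it orders the pair the same way as the aggregated human judgements. The function name pairwise_accuracy and the toy scores are illustrative assumptions, not code or data released with the paper.

from itertools import combinations

def pairwise_accuracy(metric_scores, human_scores):
    # metric_scores, human_scores: dicts mapping system name -> system-level score.
    # Returns the fraction of system pairs where the metric's ordering matches
    # the ordering implied by the human scores.
    systems = sorted(metric_scores)
    agree, total = 0, 0
    for a, b in combinations(systems, 2):
        metric_delta = metric_scores[a] - metric_scores[b]
        human_delta = human_scores[a] - human_scores[b]
        # Count the pair as correct when both deltas have the same sign;
        # ties (a zero delta) are counted as disagreement here.
        if metric_delta * human_delta > 0:
            agree += 1
        total += 1
    return agree / total if total else 0.0

# Toy usage with made-up scores for three hypothetical systems.
metric = {"sysA": 34.1, "sysB": 35.0, "sysC": 33.2}   # e.g. BLEU
human = {"sysA": 0.12, "sysB": 0.25, "sysC": -0.05}   # e.g. mean human rating
print(pairwise_accuracy(metric, human))  # -> 1.0, the metric agrees on all pairs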