Automatic metrics are commonly used as the exclusive tool for declaring the superiority of one machine translation system's quality over another. The community's choice of automatic metric guides research directions and industrial developments by deciding which models are deemed better. Evaluating metrics' correlations with human judgements has so far been limited to small collections of such judgements. In this paper, we corroborate how reliable metrics are, in contrast to human judgements, on what is to the best of our knowledge the largest collection of human judgements to date. We investigate which metrics achieve the highest accuracy when producing system-level quality rankings for pairs of systems, taking human judgement as the gold standard, which is the scenario closest to how metrics are actually used. Furthermore, we evaluate the performance of various metrics across different language pairs and domains. Lastly, we show that exclusive reliance on BLEU has negatively affected the past development of improved models. We release the collection of human judgements of 4,380 systems and 2.3M annotated sentences for further analysis and replication of our work.
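To make the system-level evaluation described above concrete, the sketch below computes pairwise ranking accuracy: the fraction of system pairs for which a metric agrees with human judgement on which system is better. This is a minimal illustration, not the paper's released code; the function name, data layout, and example scores are assumptions introduced here for clarity.

```python
# Minimal sketch of pairwise system-ranking accuracy against human judgement.
# All names and example numbers are illustrative, not taken from the paper.
from itertools import combinations


def pairwise_accuracy(metric_scores, human_scores):
    """Fraction of system pairs where the metric and the human gold standard
    agree on which system is better. Both dicts map system name -> score,
    with higher scores meaning better quality."""
    agree, total = 0, 0
    for sys_a, sys_b in combinations(metric_scores, 2):
        metric_delta = metric_scores[sys_a] - metric_scores[sys_b]
        human_delta = human_scores[sys_a] - human_scores[sys_b]
        if human_delta == 0:  # skip pairs tied under human judgement
            continue
        total += 1
        if metric_delta * human_delta > 0:  # same sign = same ranking
            agree += 1
    return agree / total if total else 0.0


# Hypothetical example: the metric misranks one of three pairs -> accuracy 2/3.
metric = {"sysA": 30.1, "sysB": 28.4, "sysC": 29.0}
human = {"sysA": 0.15, "sysB": 0.12, "sysC": 0.05}
print(pairwise_accuracy(metric, human))  # 0.666...
```

Under this framing, a metric is judged purely by how often it would lead to the same shipping decision as human evaluation, rather than by a correlation coefficient over segment scores.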