The creation of a quality summarization dataset is an expensive, time-consuming effort, requiring the production and evaluation of summaries by both trained humans and machines. If such an effort is made in one language, it would be beneficial to reuse it in other languages without repeating the human annotations. To investigate how much we can trust machine translation of such a dataset, we translate the English dataset SummEval to seven languages and compare performance across automatic evaluation measures. We explore equivalence testing as the appropriate statistical paradigm for evaluating correlations between human and automated scoring of summaries. While we find some potential for dataset reuse in languages similar to the source, most summary evaluation methods are not found to be statistically equivalent across translations.
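To make the notion of equivalence testing concrete, the following is a minimal sketch of a two one-sided tests (TOST) procedure for comparing two independent human-metric correlations (e.g., on the original English data versus a translated copy) on the Fisher z scale. The function name, the equivalence margin of 0.1, and the example inputs are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy import stats

def tost_correlation(r1, n1, r2, n2, margin=0.1, alpha=0.05):
    """TOST equivalence test for two independent Pearson correlations.

    r1, r2 : observed correlations (e.g., human-vs-metric correlation
             on the source-language and translated datasets)
    n1, n2 : number of observations behind each correlation
    margin : equivalence margin on the Fisher z scale (assumed choice)
    alpha  : significance level for each one-sided test
    """
    # Fisher z-transform stabilizes the variance of the correlations.
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of z1 - z2
    diff = z1 - z2

    # Lower test: H0: diff <= -margin  vs.  H1: diff > -margin
    p_lower = 1.0 - stats.norm.cdf((diff + margin) / se)
    # Upper test: H0: diff >= +margin  vs.  H1: diff < +margin
    p_upper = stats.norm.cdf((diff - margin) / se)

    # Equivalence is claimed only if both one-sided tests reject.
    p_tost = max(p_lower, p_upper)
    return p_tost, p_tost < alpha

# Hypothetical usage: compare a metric's correlation with human scores
# on English (r=0.45) against a translated copy (r=0.40), 100 items each.
p_value, equivalent = tost_correlation(0.45, 100, 0.40, 100, margin=0.1)
```

Unlike a standard significance test, which can only fail to detect a difference, this procedure lets one positively conclude that the two correlations differ by no more than the chosen margin.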