Evaluating large summarization corpora with human annotators has proven to be expensive from both an organizational and a financial perspective. Therefore, many automatic evaluation metrics have been developed to measure summarization quality in a fast and reproducible way. However, most of these metrics still depend on humans, as they need gold-standard summaries written by linguistic experts. Since BLANC does not require gold summaries and can supposedly be used with any underlying language model, we consider its application to the evaluation of summarization in German. This work demonstrates how to adjust the BLANC metric to a language other than English. We compare BLANC scores with crowd and expert ratings, as well as with commonly used automatic metrics, on a German summarization data set. Our results show that BLANC in German is particularly good at evaluating informativeness.
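To illustrate the kind of language swap described above, the following is a minimal sketch based on the reference blanc Python package; the model_name argument and the German checkpoint bert-base-german-cased are assumptions for illustration, not the exact configuration used in this work.

```python
# Minimal sketch: reference-free BLANC scoring with a German masked language model.
# Assumes the `blanc` package (pip install blanc) and that its BlancHelp class
# accepts a Hugging Face model name via `model_name`, as in its reference implementation.
from blanc import BlancHelp

document = (
    "Die Stadtverwaltung kündigte an, dass die Bibliothek wegen Renovierungsarbeiten "
    "ab nächster Woche für zwei Monate geschlossen bleibt."
)
summary = "Die Bibliothek schließt zwei Monate wegen Renovierung."

# Swap the default English BERT for a German checkpoint (assumed: bert-base-german-cased).
blanc_help = BlancHelp(model_name="bert-base-german-cased", device="cpu")

# BLANC-help: how much the summary helps the model fill in masked tokens of the document,
# so no gold-standard reference summary is needed.
score = blanc_help.eval_once(document, summary)
print(f"BLANC-help score: {score:.3f}")
```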