Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments. To alleviate this issue, recent work has proposed evaluation metrics which rely on question answering models to assess whether a summary contains all the relevant information in its source document. Though promising, the proposed approaches have so far failed to correlate better than ROUGE with human judgments. In this paper, we extend previous approaches and propose a unified framework, named SAFEval. In contrast to established metrics such as ROUGE or BERTScore, SAFEval does not require any ground-truth reference. Nonetheless, SAFEval substantially improves the correlation with human judgments over four evaluation dimensions (consistency, coherence, fluency, and relevance), as shown in the extensive experiments we report.
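To make the QA-based evaluation idea concrete, the sketch below is a minimal, hypothetical illustration of the general recipe (it is not the SAFEval implementation): a fixed set of questions is assumed to be given, the same extractive QA model answers them once against the source document and once against the summary, and answer agreement is measured with a simple token-level F1. In SAFEval-style metrics the questions are generated automatically from the source, which is what removes the need for a ground-truth reference; here they are passed in by hand purely for illustration.

```python
# Hypothetical sketch of QA-based summary evaluation (not the SAFEval code).
# Assumptions: questions are provided by the caller, and answer agreement
# is approximated with token-level F1 between the two predicted answers.
from collections import Counter
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def token_f1(pred: str, gold: str) -> float:
    """Token-level F1 between two answer strings."""
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

def qa_based_score(source: str, summary: str, questions: list[str]) -> float:
    """Average answer agreement when the same questions are answered
    from the source document and from the candidate summary."""
    scores = []
    for question in questions:
        ans_source = qa(question=question, context=source)["answer"]
        ans_summary = qa(question=question, context=summary)["answer"]
        scores.append(token_f1(ans_summary, ans_source))
    return sum(scores) / len(scores) if scores else 0.0
```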