The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them. While existing benchmarks measure the correlation with human judgements of faithfulness on model-generated summaries, they are insufficient for diagnosing whether metrics are: 1) consistent, i.e., their scores decrease as errors are introduced into a summary, 2) effective on human-written texts, and 3) sensitive to different error types (as summaries can contain multiple errors). To address these needs, we present a Benchmark of Unfaithful Minimal Pairs (BUMP), a dataset of 889 human-written, minimally different summary pairs, where a single error (from an ontology of 7 types) is introduced to a summary from the CNN/DailyMail dataset to produce an unfaithful summary. We find BUMP complements existing benchmarks in a number of ways: 1) the summaries in BUMP are harder to discriminate and less probable under SOTA summarization models, 2) BUMP enables measuring the consistency of metrics, and reveals that the most discriminative metrics tend not to be the most consistent, and 3) BUMP enables the measurement of metrics' performance on individual error types and highlights areas of weakness for future work.
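To make the consistency criterion concrete, the sketch below shows one way such a score could be computed over minimal pairs: the fraction of (faithful, unfaithful) pairs for which a metric assigns a strictly lower score to the unfaithful summary. This is an illustrative sketch only, not the paper's exact evaluation protocol; the function names (`consistency_rate`, `toy_overlap_metric`), the pair format, and the stand-in metric are all hypothetical.

```python
# Minimal sketch (illustrative, not the paper's protocol): score a metric's
# consistency as the fraction of minimal pairs where the introduced error
# lowers the metric score. All names below are hypothetical.
from typing import Callable, Iterable, Tuple


def consistency_rate(
    pairs: Iterable[Tuple[str, str, str]],   # (document, faithful_summary, unfaithful_summary)
    metric: Callable[[str, str], float],     # metric(document, summary) -> faithfulness score
) -> float:
    """Fraction of pairs where the unfaithful summary scores strictly lower."""
    pairs = list(pairs)
    hits = sum(
        1
        for doc, faithful, unfaithful in pairs
        if metric(doc, unfaithful) < metric(doc, faithful)
    )
    return hits / len(pairs) if pairs else 0.0


# Stand-in metric (token overlap with the source document), used only to show
# the interface; a real faithfulness metric would be plugged in here instead.
def toy_overlap_metric(document: str, summary: str) -> float:
    doc_tokens = set(document.lower().split())
    summ_tokens = summary.lower().split()
    if not summ_tokens:
        return 0.0
    return sum(t in doc_tokens for t in summ_tokens) / len(summ_tokens)


# Toy example of a minimal pair with a single introduced error.
example_pairs = [
    (
        "The city council approved the budget on Tuesday.",
        "The council approved the budget on Tuesday.",
        "The council rejected the budget on Tuesday.",
    ),
]
print(consistency_rate(example_pairs, toy_overlap_metric))  # 1.0 on this toy pair
```

The key design point is that consistency is a paired, within-example comparison, so it can be measured even without absolute human faithfulness ratings, whereas the discriminative evaluations in existing benchmarks compare metric scores against such ratings across unrelated summaries.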