Recently, there has been a growing interest in designing text generation systems from a discourse coherence perspective, e.g., modeling the interdependence between sentences. Still, recent BERT-based evaluation metrics are weak at recognizing coherence, and thus fail to penalize incoherent elements in system outputs. In this work, we introduce DiscoScore, a parametrized discourse metric, which uses BERT to model discourse coherence from different perspectives, driven by Centering theory. Our experiments encompass 16 non-discourse and discourse metrics, including DiscoScore and popular coherence models, evaluated on summarization and document-level machine translation (MT). We find that (i) the majority of BERT-based metrics correlate much worse with human-rated coherence than early discourse metrics invented a decade ago; (ii) the recent state-of-the-art BARTScore is weak when operated at system level -- which is particularly problematic as systems are typically compared in this manner. DiscoScore, in contrast, achieves strong system-level correlation with human ratings, not only in coherence but also in factual consistency and other aspects, and surpasses BARTScore by over 10 correlation points on average. Further, aiming to understand DiscoScore, we justify the importance of discourse coherence for evaluation metrics and explain the superiority of one variant over another. Our code is available at \url{https://github.com/AIPHES/DiscoScore}.
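To give a rough intuition for the Centering-theory view of coherence that motivates DiscoScore, the following toy sketch (not the authors' method; DiscoScore derives its scores from BERT representations, and all names here are ours) scores a text by how often adjacent sentences keep referring to shared foci, crudely approximated by overlapping content words. Incoherent texts that jump between unrelated topics score lower under this proxy.

```python
# Toy illustration only, NOT the DiscoScore implementation: a Centering-style
# coherence proxy that checks whether adjacent sentences share "foci",
# approximated here by overlapping content words.
import re

STOPWORDS = {"the", "a", "an", "is", "was", "it", "and", "of", "to", "in", "on"}

def foci(sentence: str) -> set[str]:
    """Crude stand-in for Centering theory's forward-looking centers."""
    tokens = re.findall(r"[A-Za-z]+", sentence.lower())
    return {t for t in tokens if t not in STOPWORDS and len(t) > 2}

def focus_continuity(text: str) -> float:
    """Fraction of adjacent sentence pairs sharing at least one focus.

    Higher values suggest smoother entity transitions; texts that jump
    between unrelated topics score lower.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    if len(sentences) < 2:
        return 1.0
    pairs = zip(sentences, sentences[1:])
    shared = sum(1 for a, b in pairs if foci(a) & foci(b))
    return shared / (len(sentences) - 1)

coherent = "John bought a car. The car was red. The car took John to work."
incoherent = "John bought a car. Bananas contain potassium. Rain fell in Oslo."
print(focus_continuity(coherent))    # 1.0: every adjacent pair shares a focus
print(focus_continuity(incoherent))  # 0.0: no adjacent pair shares a focus
```

A surface-level string metric would judge both example texts as equally fluent sentence by sentence; only a discourse-aware signal like the entity-transition count above separates them, which is the gap DiscoScore targets.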