An important aspect of developing dialogue systems is how to evaluate and compare the performance of different systems. Existing automatic evaluation metrics score quality at the turn level and average these turn-level scores for system-level comparison. In this paper, we propose instead to measure the performance of a dialogue system by computing the distribution-wise distance between its generated conversations and real-world conversations. Specifically, two distribution-wise metrics, FBD and PRD, are developed and evaluated. Experiments on several dialogue corpora show that our proposed metrics correlate better with human judgments than existing metrics.
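To make the distribution-wise idea concrete, the sketch below computes a Fréchet-style distance between two sets of conversation embeddings, in the spirit of FBD: each corpus is summarized by the mean and covariance of its embeddings, and the distance between the two Gaussian fits is reported. It assumes conversations have already been encoded into fixed-size vectors (e.g., with a BERT encoder); the function name and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two embedding sets.

    real_emb, gen_emb: arrays of shape (n_samples, dim), e.g. sentence
    embeddings of real and system-generated conversations.
    """
    mu_r, mu_g = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny
    # imaginary component that numerical error can introduce.
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Toy usage: random 128-dim vectors standing in for encoded conversations
# from a reference corpus and from a dialogue system under evaluation.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 128))
generated = rng.normal(0.5, 1.0, size=(500, 128))
print(frechet_distance(real, generated))  # larger => distributions differ more
```

Because the score compares whole corpora rather than averaging per-turn judgments, it can capture system-level properties, such as a lack of diversity, that turn-level metrics miss.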