The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a reference-based metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, and the reliability of the given reference set. Yet the latter has received little attention, and our work attempts to fill this gap. We first make explicit an assumption underlying reference-based metrics: if more high-quality references are added to the reference set, the reliability of the metric will increase. Next, we present REAM$\sharp$: an enhancement approach to Reference-based EvAluation Metrics for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predictions can be used to augment the reference set and thereby improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and the improved reliability of reference-based metrics when used with the augmented reference sets.
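To make the setting concrete, below is a minimal sketch of a generic reference-based metric; it is an illustrative assumption, not REAM$\sharp$'s actual model, and the toy similarity function stands in for a real metric such as BLEU or an embedding-based score. It shows why adding a high-quality reference to the set can only maintain or raise the score of a response that matches it.

```python
# Illustrative sketch (assumed, not the paper's implementation) of a
# reference-based metric: score a predicted response against each reference
# and take the maximum similarity over the reference set.
from difflib import SequenceMatcher  # toy stand-in for BLEU / embedding similarity


def similarity(response: str, reference: str) -> float:
    """Toy word-overlap similarity; a real metric would use BLEU, BERTScore, etc."""
    return SequenceMatcher(None, response.split(), reference.split()).ratio()


def reference_based_score(response: str, references: list[str]) -> float:
    """Score a predicted response against a reference set (max over references)."""
    return max(similarity(response, ref) for ref in references)


# Example: augmenting the reference set with another plausible reply.
references = ["I like hiking on weekends."]
response = "I usually go hiking when the weather is nice."
print(reference_based_score(response, references))

references.append("I usually go hiking on sunny weekends.")  # added high-quality reference
print(reference_based_score(response, references))  # score can only stay equal or rise
```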