Recent model-based, reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or target only a single dialogue quality dimension, whereas a good evaluation metric should assess multiple quality dimensions at the dialogue level. To this end, we propose a multi-dimensional dialogue-level metric that consists of three sub-metrics, each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment on their respective dimensions. Moreover, we explore two approaches to combining the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms the individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around a 16% relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
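To illustrate the metric-ensemble option, the sketch below averages the dialogue-level scores of three sub-metrics. This is a minimal sketch only: the `ensemble_score` helper and the placeholder sub-metrics are hypothetical stand-ins, not the paper's implementation, and the actual combination may differ (e.g., learned weights rather than a plain average).

```python
from statistics import mean
from typing import Callable, List

# A dialogue is represented as a list of utterance strings.
Dialogue = List[str]
# Each sub-metric maps a dialogue to a scalar quality score in [0, 1].
SubMetric = Callable[[Dialogue], float]

def ensemble_score(dialogue: Dialogue, sub_metrics: List[SubMetric]) -> float:
    """Metric ensemble: average the dimension-specific sub-metric scores."""
    return mean(metric(dialogue) for metric in sub_metrics)

# Placeholder sub-metrics: in the paper each is a trained model scoring one
# quality dimension; here they simply return fixed scores for illustration.
def dim1_score(dialogue: Dialogue) -> float:
    return 0.8

def dim2_score(dialogue: Dialogue) -> float:
    return 0.6

def dim3_score(dialogue: Dialogue) -> float:
    return 0.7

print(ensemble_score(["Hi!", "Hello, how are you?"],
                     [dim1_score, dim2_score, dim3_score]))  # ~0.7
```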