Despite growing interest in natural language generation (NLG) models that produce diverse outputs, there is currently no principled method for evaluating the diversity of an NLG system. In this work, we propose a framework for evaluating diversity metrics. The framework measures the correlation between a proposed diversity metric and a diversity parameter, a single parameter that controls some aspect of diversity in generated text. For example, a diversity parameter might be a binary variable used to instruct crowdsourcing workers to generate text with either low or high content diversity. We demonstrate the utility of our framework by: (a) establishing best practices for eliciting diversity judgments from humans, (b) showing that humans substantially outperform automatic metrics in estimating content diversity, and (c) demonstrating that existing methods for controlling diversity by tuning a "decoding parameter" mostly affect form but not meaning. Our framework can advance the understanding of different diversity metrics, an essential step on the road towards better NLG systems.
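To make the framework's core computation concrete, here is a minimal sketch, not the paper's implementation: we score how well a candidate diversity metric tracks a controlled diversity parameter via rank correlation. The metric `distinct_ngrams`, the synthetic response sets, and the parameter values below are all illustrative assumptions.

```python
# Sketch: correlate a candidate diversity metric with a diversity
# parameter. A metric that truly captures the controlled aspect of
# diversity should correlate strongly with the parameter.
from scipy.stats import spearmanr

def distinct_ngrams(responses, n=2):
    """Toy diversity metric (hypothetical): fraction of unique
    n-grams across a set of responses for the same input."""
    ngrams = []
    for text in responses:
        tokens = text.split()
        ngrams += [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

# Each entry pairs a diversity-parameter value with a response set.
# The parameter could be, e.g., sampling temperature or a low/high
# content-diversity instruction given to crowdworkers.
test_sets = [
    (0.2, ["the cat sat", "the cat sat", "the cat sat down"]),
    (0.7, ["the cat sat", "a dog barked loudly", "the cat ran"]),
    (1.0, ["birds fly south", "a dog barked loudly", "rain fell hard"]),
]

params = [p for p, _ in test_sets]
scores = [distinct_ngrams(r) for _, r in test_sets]

rho, _ = spearmanr(params, scores)
print(f"Spearman correlation with diversity parameter: {rho:.2f}")
```

Under this setup, a high correlation indicates the metric is sensitive to the aspect of diversity the parameter controls; the same harness can compare automatic metrics against human judgments elicited under the same parameter.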