The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework for using large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human judgments on the summarization task, outperforming all previous methods by a large margin. We also present a preliminary analysis of the behavior of LLM-based evaluators, and highlight the potential issue of LLM-based evaluators having a bias towards LLM-generated texts.
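To make the form-filling paradigm concrete, here is a minimal sketch of what a G-Eval-style scoring call could look like. The prompt layout (task description, a single evaluation criterion, auto-generated CoT evaluation steps, and a scores-only form field) is paraphrased from the framework described in the abstract; `call_llm` is a hypothetical stand-in for whatever chat-completion API is available, and averaging over several samples is a simple approximation of the paper's probability-weighted scoring.

```python
from statistics import mean
from typing import Callable

# Prompt wording is illustrative, not the paper's exact template.
PROMPT_TEMPLATE = """You will be given one summary written for a news article.

Your task is to rate the summary on one metric.

Evaluation Criteria:
Coherence (1-5) - the collective quality of all sentences.

Evaluation Steps:
{cot_steps}

Source Text:
{source}

Summary:
{summary}

Evaluation Form (scores ONLY):
- Coherence:"""


def g_eval_score(
    call_llm: Callable[[str], str],  # hypothetical: prompt -> completion text
    cot_steps: str,                  # CoT evaluation steps, themselves generated by the LLM
    source: str,
    summary: str,
    n_samples: int = 20,
) -> float:
    """Score one summary by sampling the form-filling prompt several times."""
    prompt = PROMPT_TEMPLATE.format(
        cot_steps=cot_steps, source=source, summary=summary
    )
    scores = []
    for _ in range(n_samples):
        reply = call_llm(prompt)
        # Keep only replies that contain a parsable 1-5 rating.
        digits = [c for c in reply if c.isdigit()]
        if digits:
            scores.append(min(max(int(digits[0]), 1), 5))
    return mean(scores) if scores else float("nan")
```

In this sketch the final score for a summary is the mean over `n_samples` sampled ratings; the paper itself weights each possible score by its output probability, which this averaging only approximates.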