The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to correlate relatively poorly with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still show lower correspondence with human judgments than medium-sized neural evaluators. In this work, we present GPTEval, a framework that uses large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that GPTEval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human judgments on the summarization task, outperforming all previous methods by a large margin. We also conduct a preliminary analysis of the behavior of LLM-based evaluators and highlight the potential issue of LLM-based evaluators being biased towards LLM-generated texts.
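To make the form-filling paradigm concrete, the sketch below shows one way an LLM could be prompted with evaluation criteria and chain-of-thought evaluation steps, then asked to fill in a numeric score. This is a minimal illustration under stated assumptions, not the paper's actual prompts or implementation: the prompt text, the `score_summary` helper, and the use of the OpenAI chat-completions client are all illustrative choices.

```python
# Minimal sketch of an LLM-based, form-filling evaluator in the spirit of GPTEval.
# Assumes the OpenAI Python client (>=1.0) and an illustrative prompt; the actual
# prompts, CoT steps, and scoring procedure used in the paper may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EVAL_PROMPT = """You will be given one summary written for a news article.

Evaluation Criteria:
Coherence (1-5) - the collective quality of all sentences.

Evaluation Steps (chain-of-thought):
1. Read the source article carefully and identify its main points.
2. Read the summary and check whether it presents those points in a clear, logical order.
3. Assign a coherence score from 1 to 5.

Source Article:
{article}

Summary:
{summary}

Evaluation Form (scores ONLY):
- Coherence:"""


def score_summary(article: str, summary: str, model: str = "gpt-4") -> float:
    """Ask the LLM to fill in the evaluation form and parse the numeric score."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": EVAL_PROMPT.format(article=article, summary=summary),
        }],
        temperature=0,
    )
    return float(response.choices[0].message.content.strip())
```

In this reference-free setup, only the source article and the generated summary are needed; no human-written reference summary is required, which is the property the abstract highlights for new tasks that lack references.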