Human ratings are one of the most prevalent methods for evaluating the performance of natural language processing algorithms. Similarly, it is common to use human raters to measure the quality of sentences produced by a natural language generation model. In this paper, we argue for exploring the use of subjective evaluations within the process of training language generation models in a multi-task learning setting. As a case study, we use a crowd-authored dialogue corpus to fine-tune six different language generation models. Two of these models incorporate multi-task learning and use subjective ratings of lines as part of an explicit learning goal. A human evaluation of the generated dialogue lines reveals that utterances generated by the multi-tasking models were subjectively rated as the most typical, the most effective at moving the conversation forward, and the least offensive. Based on these promising first results, we discuss future research directions for incorporating subjective human evaluations into language model training, thereby keeping the human user in the loop during the development process.
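To make the multi-task setup concrete, the sketch below shows one common way to combine a next-token language modeling loss with an auxiliary loss that predicts a subjective rating for each dialogue line. This is a minimal illustration, not the paper's exact implementation: the GPT-2 backbone, the linear rating head, and the `rating_weight` coefficient are all assumptions introduced here for clarity.

```python
# Minimal sketch of multi-task fine-tuning: language modeling plus
# regression on a per-line subjective rating. All hyperparameters and
# architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel


class RatedDialogueModel(nn.Module):
    def __init__(self, rating_weight: float = 0.5):
        super().__init__()
        self.lm = GPT2LMHeadModel.from_pretrained("gpt2")
        hidden_size = self.lm.config.n_embd
        # Auxiliary head: predict the crowd rating of a line from the
        # final hidden state of its last (non-padding) token.
        self.rating_head = nn.Linear(hidden_size, 1)
        self.rating_weight = rating_weight
        self.mse = nn.MSELoss()

    def forward(self, input_ids, attention_mask, ratings):
        out = self.lm(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=input_ids,            # standard causal LM objective
            output_hidden_states=True,
        )
        lm_loss = out.loss

        # Pool the hidden state of each sequence's last real token.
        last_idx = attention_mask.sum(dim=1) - 1
        last_hidden = out.hidden_states[-1]
        pooled = last_hidden[torch.arange(last_hidden.size(0)), last_idx]

        # Rating prediction as the second task in the joint objective.
        pred_rating = self.rating_head(pooled).squeeze(-1)
        rating_loss = self.mse(pred_rating, ratings)

        return lm_loss + self.rating_weight * rating_loss
```

In this formulation, the shared transformer is updated by both losses, so the subjective ratings shape the same representations that are used for generation; the weighting between the two tasks would need to be tuned on the dialogue corpus in question.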