Conversational surveys, in which an agent asks open-ended questions through natural language interfaces, offer a new way to collect information from people. A good follow-up question in a conversational survey elicits high-quality information and delivers an engaging experience. However, generating high-quality follow-up questions on the fly is a non-trivial task: the agent must understand diverse and complex participant responses, adhere to the survey goal, and produce clear and coherent questions. In this study, we propose a knowledge-driven follow-up question generation framework. The framework combines a knowledge selection module, which identifies salient topics in participants' responses, with a generative model guided by the selected knowledge entity-relation pairs. To investigate the effectiveness of the proposed framework, we build a new dataset for open-domain follow-up question generation and present a new set of reference-free evaluation metrics based on the Gricean Maxims. Our experiments demonstrate that our framework outperforms a GPT-based baseline in both objective evaluation and human-expert evaluation.