Open-ended human learning and information-seeking are increasingly mediated by digital assistants. However, such systems often ignore the user's pre-existing knowledge. Assuming a correlation between engagement and user responses such as "liking" messages or asking follow-up questions, we design a Wizard-of-Oz dialog task to test the hypothesis that engagement increases when users are presented with facts related to what they already know. By crowd-sourcing this experiment, we collect and release 14K dialogs (181K utterances) in which users and assistants converse about geographic topics such as geopolitical entities and locations. The dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages. We find that responses which draw on a user's prior knowledge increase engagement. We incorporate this knowledge into a multi-task model that reproduces human assistant policies and improves over a BERT content model by 13 mean reciprocal rank points.