One of the challenges faced by conversational agents is their inability to identify the unstated presumptions in users' commands, a task that is trivial for humans thanks to common sense. In this paper, we propose a zero-shot commonsense reasoning system for conversational agents that addresses this gap. Our reasoner uncovers unstated presumptions from user commands that satisfy a general template of if-(state), then-(action), because-(goal). It uses a state-of-the-art transformer-based generative commonsense knowledge base (KB) as its source of background knowledge for reasoning. We propose a novel iterative knowledge query mechanism that extracts multi-hop reasoning chains from the neural KB, using symbolic logic rules to significantly reduce the search space. Like any KB gathered to date, our commonsense KB is prone to missing knowledge. We therefore propose to elicit the missing knowledge conversationally from human users via a novel dynamic question generation strategy, which generates and presents contextualized queries to them. We evaluate the model in a user study and achieve a 35% higher success rate than the state of the art (SOTA).
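A minimal sketch of the if-(state), then-(action), because-(goal) template described above, as a plain data structure. The `Presumption` class, the field names, and the example command are hypothetical illustrations, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Presumption:
    """An unstated presumption recovered from a user command,
    following the if-(state), then-(action), because-(goal) template."""
    state: str   # if-(state): a precondition the user left unstated
    action: str  # then-(action): the action the user requested
    goal: str    # because-(goal): the goal motivating the action

    def render(self) -> str:
        return f"if {self.state}, then {self.action}, because {self.goal}"

# Hypothetical example: the command "turn on the kettle" might carry
# the unstated presumption that the kettle contains water, in service
# of the goal of making tea.
p = Presumption(state="the kettle contains water",
                action="turn on the kettle",
                goal="the user wants to make tea")
print(p.render())
# → if the kettle contains water, then turn on the kettle, because the user wants to make tea
```

A reasoner built on this template would search the commonsense KB for candidate `state` and `goal` fillers consistent with the stated `action`, which is where the multi-hop query mechanism and symbolic pruning described in the abstract come in.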