In order for conversational AI systems to hold more natural and broad-ranging conversations, they will require much more commonsense knowledge, including the ability to identify the unstated presumptions of their conversational partners. For example, in the command "If it snows at night then wake me up early because I don't want to be late for work," the speaker relies on the listener's commonsense reasoning to infer the implicit presumption that they wish to be woken only if it snows enough to cause traffic slowdowns. We consider here the problem of understanding such imprecisely stated natural language commands given in the form of "if-(state), then-(action), because-(goal)" statements. More precisely, we consider the problem of identifying the unstated presumptions of the speaker that allow the requested action to achieve the desired goal from the given state, once that state is elaborated by making the implicit presumptions explicit. We release a benchmark dataset for this task, collected from humans and annotated with commonsense presumptions. We present a neuro-symbolic theorem prover that extracts multi-hop reasoning chains and apply it to this problem. Furthermore, to accommodate the reality that current AI commonsense systems lack full coverage, we also present an interactive conversational framework, built on our neuro-symbolic system, that conversationally elicits commonsense knowledge from humans to complete its reasoning chains.