In situations such as habitat construction, station inspection, or cooperative exploration, incorrect assumptions about the environment or task across the team could lead to mission failure. It is therefore important to resolve any ambiguity about the mission between teammates before embarking on a commanded task. Formal methods offer such safeguards: correct-by-construction reactive controllers for a robot can be synthesized from specifications written in Linear Temporal Logic. If a robot fails to synthesize a controller for a given instruction, there must be a logical inconsistency in the environmental assumptions and/or the described interactions. These specifications, however, are typically crafted in a language unique to the verification framework, requiring the human collaborator to be fluent in the software tool used to construct them. Furthermore, if the controller fails to synthesize, the specification may prove difficult to repair. Natural language, paired with modern symbol grounding techniques, is a natural medium for generating these specifications: it empowers non-expert humans to describe tasks to robot teammates while retaining the benefits of formal verification. Dialogue can additionally be used to inform robots about the environment and to resolve ambiguities before mission execution. This paper introduces an architecture for natural language interaction that uses a symbolic representation to inform the construction of a specification in Linear Temporal Logic. The novel aspect of this approach is a mechanism for resolving synthesis failure by hypothesizing corrections to the specification, which are then verified through human-robot dialogue. The proposed architecture is demonstrated in experiments with a simulated Astrobee robot navigating the International Space Station.
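For illustration only (the propositions below are hypothetical and not taken from this work), a reactive task specification of the kind discussed here typically takes an assume-guarantee form,

\[
\varphi \;=\; \varphi_e \rightarrow \varphi_s,
\qquad
\varphi_s \;=\; \square\,\lozenge\,\mathrm{visit\_airlock} \;\wedge\; \square\,\neg\,\mathrm{keepout\_zone},
\]

where \(\varphi_e\) encodes assumptions about the environment, \(\varphi_s\) encodes the desired robot behavior (here, "always eventually visit the airlock" and "never enter a keep-out zone"), and \(\square\) and \(\lozenge\) denote the LTL operators "always" and "eventually." Synthesis fails precisely when no controller can guarantee \(\varphi_s\) under \(\varphi_e\), which is the kind of logical inconsistency the dialogue mechanism is meant to surface and repair.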