As robots become increasingly complex, they must explain their behaviors to gain trust and acceptance. However, it may be difficult to fully convey information about past behavior through verbal explanation alone, especially regarding objects no longer present due to robots' or humans' actions. Humans often physically mimic past movements to accompany verbal explanations. Inspired by this human-human interaction, in this tool paper we describe the technical implementation of a system that allows robots to replay their past behaviors. Specifically, we used Behavior Trees to encode and separate robot behaviors, and schemaless MongoDB to structurally store and query the underlying sensor data and joint control messages for future replay. Our approach generalizes to different types of replay, including manipulation and navigation replay, as well as visual (i.e., augmented reality (AR)) and auditory replay. Additionally, we briefly summarize a user study that provides empirical evidence of the system's effectiveness and efficiency. Sample code and instructions are available on GitHub at https://github.com/umhan35/robot-behavior-replay.
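To make the storage-and-replay idea concrete, the following is a minimal sketch (not the authors' released code; see the GitHub repository for that) of how joint control messages might be logged schemalessly to MongoDB and later queried in timestamp order for replay. The database and collection names (`replay_db`, `joint_commands`), the record fields, and the `behavior` tag are illustrative assumptions.

```python
# Sketch: log joint control messages to schemaless MongoDB and replay them
# in timestamp order. Names and fields are hypothetical, for illustration.
import time
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
coll = client["replay_db"]["joint_commands"]  # assumed db/collection names

def record(joint_names, positions, behavior="pick_object"):
    """Store one joint control message; schemaless, so fields can vary."""
    coll.insert_one({
        "behavior": behavior,       # e.g., the Behavior Tree node that ran
        "stamp": time.time(),       # timestamp used to order the replay
        "joint_names": joint_names,
        "positions": positions,
    })

def replay(behavior="pick_object"):
    """Query stored messages for one behavior and re-issue them in order."""
    prev = None
    for msg in coll.find({"behavior": behavior}).sort("stamp", ASCENDING):
        if prev is not None:
            time.sleep(msg["stamp"] - prev)  # preserve the original timing
        prev = msg["stamp"]
        # Here the positions would be sent to the robot's joint controller.
        print(msg["joint_names"], msg["positions"])
```

Because MongoDB is schemaless, sensor data and navigation commands could be logged to other collections with different fields without any schema migration, which is one plausible reason the paper highlights this choice.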