Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for increasing trust and transparency when interacting with humans. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a cause-effect model of the environment and II) generating causal explanations based on that model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on contrasting the failure state with the closest state that would have allowed for a successful execution. This state is found through breadth-first search and is based on success predictions from the learned causal model. We assessed our method in two different scenarios: I) stacking cubes and II) dropping spheres into a container. The obtained causal models reach a sim2real accuracy of 70% and 72%, respectively. We finally show that our novel method scales over multiple tasks and allows real robots to give failure explanations like 'the upper cube was stacked too high and too far to the right of the lower cube.'
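To make the contrastive search concrete, the following is a minimal illustrative sketch, not the authors' implementation: a breadth-first search over a discretized state space, where the learned causal model is queried for a success prediction at each candidate state. The variable names, step sizes, and the `predict_success` stub are assumptions introduced here for illustration only.

```python
from collections import deque

# Assumed discretization step per state variable (the paper's bins may differ).
STEP = {"x_offset": 0.01, "y_offset": 0.01, "drop_height": 0.02}

def predict_success(state):
    """Placeholder for a query to the learned causal Bayesian network,
    i.e. P(success | state variables). Must be provided by the model."""
    raise NotImplementedError

def neighbors(state):
    """Yield states differing from `state` by one step in a single variable."""
    for var, step in STEP.items():
        for delta in (-step, step):
            new_state = dict(state)
            new_state[var] = round(new_state[var] + delta, 6)
            yield new_state

def closest_successful_state(failure_state, threshold=0.5, max_depth=20):
    """Breadth-first search from the failure state to the nearest state
    that the causal model predicts as successful."""
    key = lambda s: tuple(sorted(s.items()))
    visited = {key(failure_state)}
    queue = deque([(failure_state, 0)])
    while queue:
        state, depth = queue.popleft()
        if predict_success(state) >= threshold:
            return state
        if depth >= max_depth:
            continue
        for nxt in neighbors(state):
            if key(nxt) not in visited:
                visited.add(key(nxt))
                queue.append((nxt, depth + 1))
    return None

def contrastive_explanation(failure_state, success_state):
    """Report which variables had to change, and in which direction,
    for the execution to have succeeded."""
    parts = []
    for var, value in failure_state.items():
        diff = success_state[var] - value
        if abs(diff) > 1e-9:
            direction = "lower" if diff < 0 else "higher"
            parts.append(f"{var} should have been {direction} by {abs(diff):.2f}")
    return "; ".join(parts) or "no change needed"
```

Under these assumptions, the explanation for a stacking failure would be produced by running `closest_successful_state` on the observed failure state and verbalizing the differing variables, analogous to the example sentence quoted above.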