Although many machine learning methods, especially from the field of deep learning, have been instrumental in addressing challenges within robotic applications, we cannot take full advantage of such methods until they can provide performance and safety guarantees. The lack of trust that impedes the use of these methods stems mainly from a lack of human understanding of what exactly machine learning models have learned and how robust their behaviour is. This is the problem the field of explainable artificial intelligence aims to solve. Based on insights from the social sciences, we know that humans prefer contrastive explanations, i.e.\ explanations answering the hypothetical question ``what if?''. In this paper, we show that linear model trees are capable of producing answers to such questions, so-called counterfactual explanations, for robotic systems, including those with multiple, continuous inputs and outputs. We demonstrate the use of this method to produce counterfactual explanations for two robotic applications. Additionally, we explore the issue of infeasibility, which is of particular interest in systems governed by the laws of physics.
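As a rough sketch of the underlying idea, and not the implementation used in this work: inside a single leaf of a linear model tree the surrogate is linear, $y = Wx + b$, so a counterfactual input that reaches a desired output can be obtained in closed form via the pseudoinverse and then checked against physical input bounds. The leaf matrices, bounds, and variable names below are invented purely for illustration.

\begin{verbatim}
# Hedged sketch: counterfactual query against one (hypothetical) leaf
# of a linear model tree; inside the leaf the model is y = W x + b.
import numpy as np

leaf_W = np.array([[0.8, -0.2, 0.1],    # illustrative local linear model:
                   [0.0,  0.5, 0.3]])   # 2 continuous outputs, 3 inputs
leaf_b = np.array([0.1, -0.4])

x = np.array([1.0, 2.0, 0.5])           # factual input
y = leaf_W @ x + leaf_b                 # factual prediction
y_target = np.array([1.2, 0.9])         # desired "what if?" outcome

# Minimum-norm input change that makes the leaf model hit y_target.
delta = np.linalg.pinv(leaf_W) @ (y_target - y)
x_cf = x + delta

# Feasibility check against (made-up) physical input limits.
lower, upper = np.full(3, -2.0), np.full(3, 2.0)
feasible = bool(np.all((x_cf >= lower) & (x_cf <= upper)))

print("counterfactual input:", x_cf)
print("reaches target:", np.allclose(leaf_W @ x_cf + leaf_b, y_target))
print("physically feasible:", feasible)
\end{verbatim}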