This paper presents a model of contrastive explanation using structural causal models. The topic of causal explanation in artificial intelligence has garnered interest in recent years as researchers and practitioners aim to increase trust in and understanding of intelligent decision-making. While different sub-fields of artificial intelligence have studied this problem from their own sub-field-specific perspectives, few models aim to capture explanation more generally. One general model is based on structural causal models. It defines an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event. However, research in philosophy and the social sciences shows that explanations are contrastive: that is, when people ask for an explanation of an event -- the fact -- they are (sometimes implicitly) asking for an explanation relative to some contrast case; that is, "Why P rather than Q?". In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical problems in artificial intelligence: classification and planning. We believe that this model can help researchers in sub-fields of artificial intelligence to better understand contrastive explanation.