We propose CX-ToM, short for counterfactual explanations with theory-of-mind, a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN). In contrast to current XAI methods that generate explanations as a single-shot response, we pose explanation as an iterative communication process, i.e. a dialog, between the machine and the human user. More concretely, our CX-ToM framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user. To do this, we use Theory of Mind (ToM), which helps us explicitly model the human's intention, the machine's mind as inferred by the human, as well as the human's mind as inferred by the machine. Moreover, most state-of-the-art XAI frameworks provide attention (or heat map) based explanations. In our work, we show that these attention-based explanations are not sufficient for increasing human trust in the underlying CNN model. In CX-ToM, we instead use counterfactual explanations called fault-lines, which we define as follows: given an input image I for which a CNN classification model M predicts class c_pred, a fault-line identifies the minimal semantic-level features (e.g., stripes on a zebra, pointed ears of a dog), referred to as explainable concepts, that need to be added to or deleted from I in order to alter the classification category of I by M to another specified class c_alt. We argue that, due to the iterative, conceptual, and counterfactual nature of CX-ToM explanations, our framework is practical and more natural for both expert and non-expert users to understand the internal workings of complex deep learning models. Extensive quantitative and qualitative experiments verify our hypotheses, demonstrating that CX-ToM significantly outperforms state-of-the-art explainable AI models.
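To make the fault-line definition above concrete, the following is a minimal illustrative sketch (not the paper's actual method): a greedy search over a pool of semantic-level concept edits (e.g., "add stripes", "remove pointed ears") for a small edit set that flips the model's prediction from c_pred to the specified alternative class c_alt. All names here (ConceptEdit, find_fault_line, predict_proba) are hypothetical and introduced only for illustration.

```python
# Hypothetical sketch of the fault-line idea: greedily apply explainable-concept
# edits until the model classifies the edited image as c_alt. This is an
# illustration of the definition in the abstract, not the paper's algorithm.
from dataclasses import dataclass
from typing import Callable, List, Sequence

import numpy as np


@dataclass
class ConceptEdit:
    """A named semantic-level edit, e.g. 'add stripes' or 'remove pointed ears'."""
    name: str
    apply: Callable[[np.ndarray], np.ndarray]  # returns an edited copy of the image


def find_fault_line(
    predict_proba: Callable[[np.ndarray], np.ndarray],  # model M: image -> class probabilities
    image: np.ndarray,                                   # input image I
    candidate_edits: Sequence[ConceptEdit],              # pool of explainable-concept edits
    c_alt: int,                                          # target alternative class
    max_edits: int = 5,
) -> List[ConceptEdit]:
    """Greedy hill-climb: repeatedly apply the edit that most increases p(c_alt),
    stopping once the prediction flips to c_alt (a fault-line) or the budget runs out."""
    applied: List[ConceptEdit] = []
    current = image.copy()
    remaining = list(candidate_edits)
    for _ in range(max_edits):
        if int(np.argmax(predict_proba(current))) == c_alt:
            return applied  # small edit set found: this is the fault-line
        if not remaining:
            break
        # Score each remaining edit by how much it raises the probability of c_alt.
        best = max(remaining, key=lambda e: predict_proba(e.apply(current))[c_alt])
        remaining.remove(best)
        applied.append(best)
        current = best.apply(current)
    return applied if int(np.argmax(predict_proba(current))) == c_alt else []
```

In this sketch the "minimality" of the fault-line is only approximated by the greedy budget; the abstract's definition requires the minimal set of explainable concepts, which in general calls for a more careful search.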