The growing need for in-depth analysis of predictive models has led to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable and responsible machine learning focus on a single aspect of the model's behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper shows how different Explanatory Model Analysis (EMA) methods complement each other and discusses why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in cognitive sciences. We formalize the grammar of IEMA to describe potential human-model dialogues. It is implemented in a widely used human-centered open-source software framework that adopts interactivity, customizability and automation as its main traits. We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model increases the performance and confidence of human decision-making.