The growing need for in-depth analysis of predictive models has led to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, which inevitably leads to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, the majority of methods developed for explainable machine learning focus on a single aspect of model behavior. In contrast, we frame explainability as an interactive and sequential analysis of a model. This paper presents how different Explanatory Model Analysis (EMA) methods complement each other and why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in the cognitive sciences. We formalize the grammar of IEMA to describe potential human-model dialogues. IEMA is implemented in a human-centered framework that adopts interactivity, customizability and automation as its main traits. Combined, these methods enhance a responsible approach to predictive modeling.