Artificial Intelligence (AI) has come to prominence as one of the major components of our society, with applications in most aspects of our lives. In this field, complex and highly nonlinear machine learning models such as ensemble models, deep neural networks, and Support Vector Machines have consistently shown remarkable accuracy in solving complex tasks. Although accurate, AI models are often "black boxes" whose behavior we are unable to understand. Relying on such models has a multifaceted impact and raises significant concerns about their transparency. Applications in sensitive and critical domains strongly motivate the effort to understand the behavior of black boxes. We propose to address this issue by building an interpretable layer on top of black box models through the aggregation of "local" explanations. We present GLocalX, a "local-first" model-agnostic explanation method. Starting from local explanations expressed in the form of local decision rules, GLocalX iteratively generalizes them into global explanations by hierarchically aggregating them. Our goal is to learn accurate yet simple interpretable models that emulate the given black box and, where possible, replace it entirely. We validate GLocalX in a set of experiments in standard and constrained settings with limited or no access to either data or local explanations. Experiments show that GLocalX accurately emulates several black box models with simple and small interpretable models, reaching state-of-the-art performance against natively global solutions. Our findings show that it is often possible to achieve high accuracy and high comprehensibility in classification models, even in complex domains with high-dimensional data, without necessarily trading one property for the other. This is a key requirement for trustworthy AI and necessary for adoption in high-stakes decision-making applications.
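To make the aggregation step concrete, below is a minimal sketch of what a "local-first" hierarchical merge of local decision rules could look like. All names (Rule, merge, fidelity, glocalx_sketch) and the greedy, fidelity-preserving merge criterion are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch (assumed names and logic, not the GLocalX implementation):
# start from local decision rules and greedily join same-label rules as long
# as fidelity to the black box does not drop by more than a tolerance alpha.
from dataclasses import dataclass
import itertools


@dataclass(frozen=True)
class Rule:
    """A decision rule: premises map feature index -> (low, high) interval."""
    premises: tuple  # ((feature, (low, high)), ...)
    label: int       # class predicted when the premises hold

    def covers(self, x):
        return all(low <= x[f] <= high for f, (low, high) in self.premises)


def merge(a: Rule, b: Rule) -> Rule:
    """Generalize two same-label rules: widen the premises they share
    and drop the rest (a simple join)."""
    pa, pb = dict(a.premises), dict(b.premises)
    shared = {
        f: (min(pa[f][0], pb[f][0]), max(pa[f][1], pb[f][1]))
        for f in pa.keys() & pb.keys()
    }
    return Rule(tuple(sorted(shared.items())), a.label)


def fidelity(rules, X, y_black_box):
    """Fraction of black box predictions reproduced by majority vote
    of the covering rules."""
    hits = 0
    for x, y in zip(X, y_black_box):
        votes = [r.label for r in rules if r.covers(x)]
        if votes and max(set(votes), key=votes.count) == y:
            hits += 1
    return hits / len(X)


def glocalx_sketch(rules, X, y_bb, alpha=0.0):
    """Iteratively merge the first same-label pair whose join loses at
    most `alpha` fidelity; stop when no pair qualifies."""
    rules = list(rules)
    improved = True
    while improved and len(rules) > 1:
        improved = False
        base = fidelity(rules, X, y_bb)
        for a, b in itertools.combinations(rules, 2):
            if a.label != b.label:
                continue
            candidate = [r for r in rules if r not in (a, b)] + [merge(a, b)]
            if fidelity(candidate, X, y_bb) >= base - alpha:
                rules = candidate
                improved = True
                break
    return rules
```

In this sketch, each accepted merge shrinks the rule set by one while keeping fidelity to the black box within the tolerance; greedy pairwise merging stands in for the hierarchical aggregation described above, trading a small amount of emulation accuracy for a smaller, more comprehensible global model.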