Interpretability is a pressing issue for decision systems. Many post hoc methods have been proposed to explain the predictions of a single machine learning model. However, business processes and decision systems are rarely centered around a single model. These systems combine multiple models that produce key predictions, and then apply decision rules to generate the final decision. To explain such decisions, we propose the Semi-Model-Agnostic Contextual Explainer (SMACE), a new interpretability method that combines a geometric approach for decision rules with existing interpretability methods for machine learning models to generate an intuitive feature ranking tailored to the end user. We show that established model-agnostic approaches produce poor results on tabular data in this setting, in particular assigning the same importance to several features, whereas SMACE can rank them in a meaningful way.