Interpretable machine learning offers insights into which factors drive a particular prediction of a black-box system. A large number of interpretation methods focus on identifying explanatory input features, and these generally fall into two main categories: attribution and selection. A popular attribution-based approach is to exploit local neighborhoods to learn an instance-specific explainer in an additive manner. Because a separate explainer must be fit around every instance, the process is inefficient and susceptible to poorly-conditioned samples. Meanwhile, many selection-based methods directly optimize local feature distributions in an instance-wise training framework and can therefore leverage global information from other inputs. However, they can only interpret single-class predictions, and many suffer from inconsistency across different settings owing to a strict reliance on a pre-defined number of selected features. This work exploits the strengths of both approaches and proposes a framework for learning local explanations simultaneously for multiple target classes. Our model explainer significantly outperforms additive and instance-wise counterparts in faithfulness while producing more compact and comprehensible explanations. We also demonstrate its capacity to select stable and important features through extensive experiments on various data sets and black-box model architectures.
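To make the attribution-based pattern above concrete, the following is a minimal sketch of a LIME-style local additive surrogate. It is not the proposed framework; the function local_additive_explanation, the toy black_box, and the parameter choices (n_samples, sigma) are illustrative assumptions, not anything specified in this work.

```python
# Minimal sketch (illustrative only): a LIME-style local additive surrogate.
# Perturb an instance within a local neighborhood, query the black box,
# and fit a proximity-weighted linear model whose coefficients act as attributions.
import numpy as np
from sklearn.linear_model import Ridge

def local_additive_explanation(black_box, x, n_samples=500, sigma=0.5, seed=0):
    """Return per-feature attributions for black_box's score at instance x.

    black_box: callable mapping an (n, d) array to scalar scores of shape (n,).
    x: 1-D array of shape (d,), the instance to explain.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample a local neighborhood around x with Gaussian perturbations.
    Z = x + sigma * rng.standard_normal((n_samples, d))
    y = black_box(Z)  # black-box scores on the neighborhood
    # Weight neighbors by proximity to x (closer samples matter more).
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    # Fit an additive (linear) surrogate; its coefficients are the attributions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z - x, y, sample_weight=weights)
    return surrogate.coef_

# Hypothetical usage with a toy black box (a fixed linear scorer):
if __name__ == "__main__":
    w_true = np.array([2.0, 0.0, -1.0, 0.0])
    black_box = lambda Z: Z @ w_true
    attributions = local_additive_explanation(black_box, np.ones(4))
    print(attributions)  # large magnitudes on features 0 and 2, near-zero elsewhere
```

Note that the surrogate must be refit for every instance to be explained, which is the per-instance cost the abstract contrasts with instance-wise selection methods trained once over the data distribution.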