As modern complex neural networks keep breaking records and solving harder problems, their predictions also become less and less intelligible. This lack of interpretability often undermines the deployment of accurate machine learning tools in sensitive settings. In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some of the limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation. Under certain distributional assumptions, such as those common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity. We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem, showing that h-Shap outperforms the state of the art in both accuracy and runtime. Code and experiments are made publicly available.
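To make the hierarchical idea concrete, here is a minimal toy sketch (not the authors' implementation) of the core mechanism: at each node of the hierarchy, compute exact Shapley values for a small number of players (here, the four quadrants of an image region, so only 2^4 coalitions are needed), then recurse only into quadrants whose Shapley value exceeds a relevance threshold `tau`. The model `f`, the masking baseline, and the quadrant partition are all illustrative assumptions.

```python
import itertools
import math

import numpy as np


def shapley_quadrants(f, image, region, baseline=0.0):
    """Exact Shapley values for the 4 quadrants of `region` (only 2**4 coalitions)."""
    r0, c0, r1, c1 = region
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    quads = [(r0, c0, rm, cm), (r0, cm, rm, c1),
             (rm, c0, r1, cm), (rm, cm, r1, c1)]

    def value(coalition):
        # Mask everything to the baseline, then reveal the quadrants in the coalition.
        masked = np.full_like(image, baseline)
        for a, b, c, d in coalition:
            masked[a:c, b:d] = image[a:c, b:d]
        return f(masked)

    n = len(quads)
    phis = []
    for i, q in enumerate(quads):
        others = quads[:i] + quads[i + 1:]
        phi = 0.0
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi += w * (value(S + (q,)) - value(S))
        phis.append(phi)
    return quads, phis


def h_shap(f, image, region=None, tau=0.0, min_size=2):
    """Recurse only into quadrants scoring above `tau`; return the relevant leaf regions."""
    if region is None:
        region = (0, 0, image.shape[0], image.shape[1])
    r0, c0, r1, c1 = region
    if r1 - r0 <= min_size or c1 - c0 <= min_size:
        return [region]
    relevant = []
    for q, phi in zip(*shapley_quadrants(f, image, region)):
        if phi > tau:  # prune irrelevant quadrants: this yields the exponential speed-up
            relevant.extend(h_shap(f, image, q, tau, min_size))
    return relevant
```

Because irrelevant quadrants are pruned at every level, the number of model evaluations scales with the depth of the hierarchy times the number of relevant regions, rather than with the exponential number of coalitions over all pixels.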