As black box explanations are increasingly being employed to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, and provide very little insight into their correctness and reliability. In addition, these methods are computationally inefficient and require significant hyper-parameter tuning. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of the proposed framework.
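To make the idea of credible intervals over feature importances concrete, the sketch below fits a Bayesian linear surrogate to model outputs on local perturbations and reports 95% credible intervals for each coefficient. This is a minimal illustration, not the authors' implementation: it assumes a Gaussian likelihood with a fixed noise variance and an isotropic Gaussian prior, and the function name, perturbation scheme, and toy black-box model are all illustrative choices.

```python
# Minimal sketch (not the authors' code): a Bayesian local surrogate in the
# spirit of BayesLIME, assuming a Gaussian likelihood with fixed noise
# variance and an isotropic Gaussian prior over the feature importances.
import numpy as np


def bayes_local_explanation(predict_fn, x, n_samples=500, scale=0.1,
                            noise_var=0.25, prior_var=10.0, seed=0):
    """Fit a Bayesian linear surrogate around x and return the posterior mean
    and 95% credible intervals for the feature importances (illustrative)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the input locally and query the black-box model.
    Z = x + scale * rng.standard_normal((n_samples, d))
    y = predict_fn(Z)
    # Conjugate Gaussian posterior over the surrogate's coefficients.
    precision = Z.T @ Z / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ Z.T @ y / noise_var
    half_width = 1.96 * np.sqrt(np.diag(cov))  # 95% credible half-width
    return mean, mean - half_width, mean + half_width


if __name__ == "__main__":
    # Toy usage: explain a simple nonlinear black box at a single point.
    black_box = lambda Z: np.sin(Z[:, 0]) + 2.0 * Z[:, 1] ** 2
    mean, lo, hi = bayes_local_explanation(black_box, np.array([0.5, -1.0]))
    print(np.round(mean, 3), np.round(lo, 3), np.round(hi, 3))
```

The width of each interval directly exposes how uncertain the surrogate is about a given feature's importance; in the framework described above, this uncertainty is also what drives the analysis of how many perturbations to sample.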