Given the pressing need to assure algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop BayLIME, a novel Bayesian extension of the LIME framework, one of the most widely used approaches in XAI. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also achieves better explanation fidelity than the state of the art (LIME, SHAP and GradCAM) thanks to its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate these desirable properties of BayLIME through both theoretical analysis and extensive experiments.
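To make the core idea concrete, the following is a minimal sketch, not the paper's implementation: LIME explains a prediction by fitting a kernel-weighted ridge regression on perturbed samples around the instance, and a Bayesian variant replaces that point-estimate surrogate with Bayesian linear regression, so that prior knowledge can be folded into the coefficients and posterior uncertainty becomes available. The synthetic data, the kernel width of 25, and scikit-learn's BayesianRidge with its default zero-mean prior are all illustrative assumptions; BayLIME itself goes further by embedding informative priors, e.g. from other XAI or V&V methods.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, Ridge

rng = np.random.default_rng(0)

# Perturbed binary interpretable features around one instance (row 0),
# labels from a stand-in "black-box" model, and exponential-kernel
# proximity weights, mimicking LIME's sampling step.
X = rng.integers(0, 2, size=(200, 10)).astype(float)
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
weights = np.exp(-np.sum((X - X[0]) ** 2, axis=1) / 25.0)  # assumed kernel width

# Plain LIME-style surrogate: kernel-weighted ridge regression,
# yielding only a point estimate of the feature importances.
lime_surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)

# Bayesian surrogate: BayesianRidge infers the noise precision and the
# (zero-mean, isotropic) prior precision from the data; the feature
# importances are the posterior mean of the coefficients.
bayes_surrogate = BayesianRidge().fit(X, y, sample_weight=weights)

print("ridge coefficients:   ", np.round(lime_surrogate.coef_, 2))
print("posterior mean:       ", np.round(bayes_surrogate.coef_, 2))
print("posterior var (diag): ", np.round(np.diag(bayes_surrogate.sigma_), 4))
```

Because the posterior mean is a weighted combination of the prior and the data, repeated explanations of the same prediction vary less from run to run and depend less on the exact kernel setting, which is the intuition behind the consistency and robustness claims in the abstract.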