Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is the clarity of the explanation itself, which must be interpreted by downstream users. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the first application of LRP to structured datasets, using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction datasets. We show that LRP is more effective for explainability than the traditional techniques of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), both locally at the level of a single sample and holistically over the whole testing set. We also discuss the significant computational time advantage of LRP (1-2 s) over LIME (22 s) and SHAP (108 s), and thus its potential for real-time application scenarios. In addition, our validation of LRP has highlighted features for enhancing model performance, thus opening up a new area of research on using XAI as an approach for feature subset selection.
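As a concrete illustration of the relevance redistribution that LRP performs, the following minimal NumPy sketch implements the LRP epsilon-rule on a toy fully-connected ReLU network. The network shape, weights, and input below are hypothetical placeholders for illustration only; the model evaluated in this work is a 1D-CNN trained on the fraud and churn datasets, which is not reproduced here.

    import numpy as np

    def lrp_epsilon(weights, biases, x, eps=1e-6):
        """LRP epsilon-rule for a small feed-forward ReLU network (illustrative).

        weights[i] has shape (n_in, n_out); biases[i] has shape (n_out,).
        Returns one relevance score per input feature of x.
        """
        # Forward pass: ReLU on hidden layers, linear output layer.
        activations = [x]
        for i, (W, b) in enumerate(zip(weights, biases)):
            z = activations[-1] @ W + b
            if i < len(weights) - 1:
                z = np.maximum(0.0, z)
            activations.append(z)

        # Backward pass: start from the raw output score and redistribute
        # relevance layer by layer with the epsilon-stabilised rule
        # R_j = sum_k a_j * w_jk / (z_k + eps * sign(z_k)) * R_k.
        relevance = activations[-1]
        for i in reversed(range(len(weights))):
            a, W, b = activations[i], weights[i], biases[i]
            z = a @ W + b
            z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser avoids division by ~0
            relevance = a * ((relevance / z) @ W.T)
        return relevance

    # Toy usage with random (hypothetical) weights: 8 input features,
    # one hidden layer of 4 units, and a single output score.
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(8, 4)), rng.normal(size=(4, 1))]
    biases = [np.zeros(4), np.zeros(1)]
    x = rng.normal(size=8)
    print(lrp_epsilon(weights, biases, x))  # per-feature relevance scores

Because this backward pass is a single pass through the network's own weights, its cost is comparable to one forward pass, which is the source of the runtime advantage over the sampling-based LIME and SHAP reported above.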