Explainable Artificial Intelligence (XAI) provides tools that help us understand how machine learning models work and how they reach a specific outcome. It increases the interpretability of models and makes them more trustworthy and transparent. In this context, many XAI methods have been proposed, with SHAP and LIME being the most popular. However, these methods assume that the predictors used in the machine learning models are independent, which in general is not necessarily true. This assumption casts doubt on the robustness of XAI outcomes, such as the list of informative predictors. Here, we propose a simple yet useful proxy that modifies the outcome of any XAI feature-ranking method, allowing it to account for dependency among the predictors. The proposed approach is model-agnostic and makes it simple to calculate the impact of each predictor on the model in the presence of collinearity.
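As an illustration only (the abstract does not specify the proxy's formula), one way such a correction could work is to redistribute a base importance vector over correlated predictors using the absolute feature correlation matrix. The function name `collinearity_adjusted_importance`, the blending scheme, and the use of least-squares coefficients as a stand-in for a SHAP or LIME ranking are all assumptions for this sketch, not the authors' method:

```python
import numpy as np

def collinearity_adjusted_importance(importances, X):
    """Hypothetical proxy: blend each feature's base importance with that
    of the features it is correlated with, weighted by |correlation|."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    # Row-normalize so each adjusted score is a weighted average.
    weights = corr / corr.sum(axis=1, keepdims=True)
    return weights @ importances

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)               # independent predictor
X = np.column_stack([x1, x2, x3])
y = x1 + x3 + 0.1 * rng.normal(size=n)

# Base ranking: absolute least-squares coefficients, standing in for
# any model-agnostic XAI importance (e.g. global SHAP or LIME scores).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
base = np.abs(coef)

adjusted = collinearity_adjusted_importance(base, X)
```

With the raw coefficients, the importance of the collinear pair (x1, x2) is split arbitrarily between them; after the correlation-weighted adjustment the pair receives similar scores while the independent predictor x3 keeps its own, illustrating the kind of collinearity-aware correction the abstract describes.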