Nowadays, explaining why a machine learning (ML) model makes a given inference is as crucial as the accuracy of that inference. Some ML models, such as decision trees, possess inherent interpretability that humans can comprehend directly. Others, such as artificial neural networks (ANNs), rely on external methods to uncover their deduction mechanism. SHapley Additive exPlanations (SHAP) is one such external method and requires a background dataset when interpreting ANNs. A background dataset generally consists of instances randomly sampled from the training dataset, yet the sampling size and its effect on SHAP remain unexplored. In our empirical study on the MIMIC-III dataset, we show that the two core explanations, SHAP values and variable rankings, fluctuate when different background datasets are drawn by random sampling, indicating that users cannot unquestioningly trust the one-shot interpretation from SHAP. Fortunately, this fluctuation decreases as the background dataset grows. We also observe a U-shape in the stability assessment of SHAP variable rankings, demonstrating that SHAP ranks the most and least important variables more reliably than moderately important ones. Overall, our results suggest that users should account for how background data affect SHAP results, whose stability improves as the background sample size increases.
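The following is a minimal sketch, not the authors' actual MIMIC-III pipeline, of the kind of experiment described above: a small neural network is explained with SHAP using background datasets of different sizes randomly sampled from the training data, and the resulting SHAP values and variable rankings are compared across draws. The toy data, model, background sizes, and seeds are illustrative assumptions; only `shap.KernelExplainer` and its `shap_values` method are standard SHAP API.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

# Toy data and a small neural network standing in for the ANN to be explained.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))        # 500 training instances, 8 variables
y_train = X_train @ rng.normal(size=8)     # synthetic target
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)

x_explain = X_train[:5]                    # instances to be explained

for background_size in (10, 50, 200):      # vary the background sample size
    for seed in (1, 2):                    # two independent random draws per size
        idx = np.random.default_rng(seed).choice(
            len(X_train), size=background_size, replace=False)
        background = X_train[idx]          # background dataset sampled from training data
        # KernelExplainer is one SHAP explainer that requires a background dataset.
        explainer = shap.KernelExplainer(model.predict, background)
        shap_vals = explainer.shap_values(x_explain)
        # Rank variables by mean absolute SHAP value; comparing rankings across
        # seeds and sizes exposes the fluctuation studied in this work.
        ranking = np.argsort(-np.abs(shap_vals).mean(axis=0))
        print(background_size, seed, ranking)
```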