While Shapley values (SV) are one of the gold standards for interpreting machine learning models, we show that they are still poorly understood, in particular in the presence of categorical variables or variables of low importance. For instance, we show that the popular practice of summing the SV of dummy variables is flawed: it yields wrong estimates of all the SV in the model and leads to spurious interpretations. Based on the identification of null and active coalitions, and on a coalitional version of the SV, we provide a correct computation and inference of important variables. Moreover, we implement a Python library that reliably computes conditional expectations and SV for tree-based models, and compare it with state-of-the-art algorithms on toy models and real data sets. (All experiments and simulations can be reproduced with the publicly available library \emph{Active Coalition of Variables}: https://github.com/acvicml/ACV.)
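The contrast between summing dummy-variable SV and treating the dummies as a single coalition can be seen on a toy cooperative game. The sketch below is a generic illustration, not the paper's algorithm: the game `v`, the player names, and the grouping helper are all hypothetical, chosen so that the payoff requires feature `a` together with at least one dummy of a categorical variable.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions (exponential cost)."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1 among the other players
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Hypothetical game: 'a' is a numeric feature, 'd1'/'d2' are dummies of one
# categorical variable; the payoff needs 'a' plus at least one dummy.
def v(S):
    return float('a' in S and ('d1' in S or 'd2' in S))

phi = shapley_values(['a', 'd1', 'd2'], v)
dummy_sum = phi['d1'] + phi['d2']  # 1/6 + 1/6 = 1/3

# Coalitional version: the two dummies act as a single player 'D',
# whose presence in a coalition expands to {'d1', 'd2'} in the game.
def v_grouped(S):
    expanded = (set(S) - {'D'}) | ({'d1', 'd2'} if 'D' in S else set())
    return v(expanded)

phi_grouped = shapley_values(['a', 'D'], v_grouped)  # phi_D = 1/2, not 1/3
```

Here summing the SV of the two dummies gives 1/3, while the coalitional SV of the grouped categorical variable is 1/2, so the common summing shortcut does not recover the value of the category as a whole.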