An important technique for exploring a black-box machine learning (ML) model is SHAP (SHapley Additive exPlanations). SHAP values decompose predictions into fair contributions of the individual features. We show that for a boosted trees model in which some or all features are modeled additively, the SHAP dependence plot of such a feature coincides with its partial dependence plot up to a vertical shift. We illustrate the result with XGBoost.
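The claimed relationship can be checked directly on a toy additive model. The sketch below is purely illustrative and does not use XGBoost: it defines a hypothetical two-feature additive function `f(x) = g1(x1) + g2(x2)`, computes exact interventional Shapley values by enumerating coalitions over a small assumed background dataset, and verifies that the SHAP value of the first feature equals its partial dependence minus the mean prediction, i.e. the two agree up to a vertical shift.

```python
import itertools
from math import factorial

# Hypothetical additive model: f(x) = g1(x1) + g2(x2) (assumption for illustration)
def g1(x1): return 2.0 * x1
def g2(x2): return x2 ** 2
def f(x): return g1(x[0]) + g2(x[1])

# Small assumed background dataset
data = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.0), (3.0, 1.5)]

def value(S, x):
    """Interventional value function: fix the features in S to x,
    average the remaining features over the background data."""
    total = 0.0
    for z in data:
        hybrid = [x[i] if i in S else z[i] for i in range(2)]
        total += f(hybrid)
    return total / len(data)

def shap_value(i, x, n=2):
    """Exact Shapley value of feature i by enumerating all coalitions."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for r in range(len(others) + 1):
        for S in itertools.combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += w * (value(set(S) | {i}, x) - value(set(S), x))
    return phi

def partial_dependence(i, xi):
    """Partial dependence of feature i at value xi."""
    return sum(f([xi if j == i else z[j] for j in range(2)])
               for z in data) / len(data)

mean_pred = sum(f(z) for z in data) / len(data)
for x in data:
    phi1 = shap_value(0, x)
    pd1 = partial_dependence(0, x[0])
    # Additively modeled feature: SHAP value = partial dependence - mean prediction
    assert abs(phi1 - (pd1 - mean_pred)) < 1e-9
```

Because the model is additive in `x1`, the marginal contribution of `x1` is the same for every coalition, so the Shapley weighting collapses and the SHAP dependence curve reproduces the partial dependence curve shifted down by the average prediction.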