The most popular methods for measuring the importance of variables in a black box prediction algorithm make use of synthetic inputs that combine predictor variables from multiple subjects. These inputs can be unlikely, physically impossible, or even logically impossible. As a result, the predictions for such cases can be based on data very unlike any the black box was trained on. We think that users cannot trust an explanation of a prediction algorithm's decision when the explanation uses such values. Instead we advocate a method called Cohort Shapley that is grounded in economic game theory and, unlike most other game-theoretic methods, uses only actually observed data to quantify variable importance. Cohort Shapley works by narrowing the cohort of subjects judged to be similar to a target subject on one or more features. A feature is important if using it to narrow the cohort makes a large difference to the cohort mean. We illustrate the method on an algorithmic fairness problem where it is essential to attribute importance to protected variables that the model was not trained on. For every subject and every predictor variable, we can compute the importance of that predictor to the subject's predicted response or to their actual response. These values can be aggregated, for example over all Black subjects, and we propose a Bayesian bootstrap to quantify uncertainty in both individual and aggregate Shapley values.
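To make the cohort construction concrete, here is a minimal sketch of the computation described above, assuming exact-match similarity between subjects and brute-force enumeration over all 2^d feature subsets. The function name, the equality rule, and the uniform default weights are our illustrative choices, not prescribed by the method, and the enumeration is exponential in d, so this is only practical for a handful of features.

```python
import numpy as np
from itertools import combinations
from math import comb

def cohort_shapley(X, y, t, w=None):
    """Cohort Shapley values for target subject t.

    X: (n, d) array of predictor values; y: (n,) responses
    (predicted or actual); t: index of the target subject;
    w: optional nonnegative subject weights (uniform if None).
    Similarity here is exact equality on each feature, a
    placeholder for whatever similarity rule is appropriate.
    """
    X, y = np.asarray(X), np.asarray(y)
    n, d = X.shape
    w = np.full(n, 1.0 / n) if w is None else np.asarray(w, dtype=float)
    # match[i, j] is True when subject i is similar to t on feature j.
    match = np.column_stack([X[:, j] == X[t, j] for j in range(d)])

    def v(S):
        # Value of coalition S: weighted mean response over the cohort
        # of subjects similar to t on every feature in S. The cohort
        # always contains t itself, so it is never empty.
        mask = np.ones(n, dtype=bool)
        for j in S:
            mask &= match[:, j]
        return np.dot(w[mask], y[mask]) / w[mask].sum()

    phi = np.zeros(d)
    for j in range(d):
        rest = [k for k in range(d) if k != j]
        for size in range(d):
            shapley_w = 1.0 / (d * comb(d - 1, size))  # |S|!(d-|S|-1)!/d!
            for S in combinations(rest, size):
                phi[j] += shapley_w * (v(S + (j,)) - v(S))
    # Efficiency: phi.sum() equals v(all features) - v(no features).
    return phi

# Toy usage: 6 subjects, 2 binary features, target subject 0.
X = np.array([[1, 0], [1, 1], [0, 0], [1, 0], [0, 1], [1, 1]])
y = np.array([1.0, 0.0, 0.5, 1.0, 0.2, 0.1])
print(cohort_shapley(X, y, t=0))
```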
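The Bayesian bootstrap mentioned at the end of the abstract can reuse the same routine: each replicate draws Dirichlet(1, ..., 1) weights over the n subjects (Rubin, 1981) and recomputes the weighted cohort means. A sketch, with the replicate count and the credible-interval summary chosen only for illustration:

```python
def bayesian_bootstrap_intervals(X, y, t, B=500, seed=0):
    """Posterior draws of Cohort Shapley values via the Bayesian
    bootstrap: reweight subjects with Dirichlet(1, ..., 1) weights
    and rerun the weighted computation for each replicate."""
    rng = np.random.default_rng(seed)
    draws = np.stack([
        cohort_shapley(X, y, t, w=rng.dirichlet(np.ones(len(y))))
        for _ in range(B)
    ])
    # Posterior mean and a 95% credible interval per feature.
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return draws.mean(axis=0), lo, hi
```

Aggregate Shapley values, for example averages over all Black subjects, can be bootstrapped the same way by recomputing the weighted aggregate within each replicate.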