Game-theoretic attribution techniques based on Shapley values are used extensively to interpret black-box machine learning models, but their exact calculation is generally NP-hard, requiring approximation methods for non-trivial models. As the computation of Shapley values can be expressed as a summation over a set of permutations, a common approach is to sample a subset of these permutations for approximation. Unfortunately, standard Monte Carlo sampling methods can exhibit slow convergence, and more sophisticated quasi-Monte Carlo methods are not well defined on the space of permutations. To address this, we investigate new approaches based on two classes of approximation methods and compare them empirically. First, we demonstrate quadrature techniques in a reproducing kernel Hilbert space (RKHS) containing functions of permutations, using the Mallows kernel to obtain explicit convergence rates of $O(1/n)$, improving on the $O(1/\sqrt{n})$ rate of plain Monte Carlo. The RKHS perspective also leads to quasi-Monte Carlo type error bounds, with a tractable discrepancy measure defined on permutations. Second, we exploit connections between the hypersphere $\mathbb{S}^{d-2}$ and permutations to create practical algorithms for generating permutation samples with good properties. Experiments show that the above techniques provide significant improvements for Shapley value estimates over existing methods, converging to a smaller RMSE in the same number of model evaluations.
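For concreteness, the permutation-based estimator referred to above can be sketched as follows. This is a minimal illustration under assumed names (`shapley_mc`, `value_fn`, and the toy additive game are not from the paper): each sampled permutation contributes one marginal contribution per player, and the plain Monte Carlo estimate averages these with uniform weights $1/n$; the kernel quadrature approach in the abstract would instead assign non-uniform weights derived from the Mallows kernel.

```python
import numpy as np

def shapley_mc(value_fn, d, n_samples, rng=None):
    """Plain Monte Carlo estimate of Shapley values via uniformly random permutations.

    value_fn:  callable taking a boolean mask of length d (the coalition S)
               and returning the coalition's value v(S).
    d:         number of players (features).
    n_samples: number of permutations sampled.
    """
    rng = np.random.default_rng(rng)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        mask = np.zeros(d, dtype=bool)
        prev = value_fn(mask)                  # v(empty coalition)
        for player in perm:
            mask[player] = True
            curr = value_fn(mask)              # v(S ∪ {player})
            phi[player] += curr - prev         # marginal contribution of player
            prev = curr
    # Uniform 1/n weighting; a quadrature rule would replace this averaging
    # with weights chosen via the Mallows kernel on permutations.
    return phi / n_samples

# Toy additive game: the exact Shapley values equal the weights themselves,
# so the estimate can be checked directly.
if __name__ == "__main__":
    weights = np.array([1.0, 2.0, 3.0])
    v = lambda mask: float(weights[mask].sum())
    print(shapley_mc(v, d=3, n_samples=200, rng=0))
```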