Measuring contributions is a classical problem in cooperative game theory, in which the Shapley value is the best-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games, where players perform Bayesian inference using their combined data and the posterior-prior KL divergence is used as the characteristic function. We show that, under some regularity conditions, the difference in Shapley value between any two players converges in probability to the corresponding difference in a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computation of posterior-prior KL divergences; only a consistent estimator of the Fisher information is needed. The framework's effectiveness is demonstrated with experiments using real-world data.
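To make the limiting game concrete, here is a minimal sketch of exact Shapley value computation where the characteristic function is proportional to the log-determinant of a coalition's joint Fisher information. The per-player Fisher information matrices are hypothetical stand-ins, and the sketch assumes the joint Fisher information of a coalition is the sum of its members' matrices (as when i.i.d. data is partitioned among players); none of these specifics are prescribed by the paper.

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions (feasible for small n)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            # Weight of a coalition of size r not containing p.
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            for coalition in combinations(others, r):
                # Marginal contribution of p to this coalition.
                phi[p] += weight * (v(coalition + (p,)) - v(coalition))
    return phi

# Hypothetical per-player Fisher information matrices (illustrative values only).
fisher = {
    "A": np.array([[2.0, 0.0], [0.0, 1.0]]),
    "B": np.array([[1.0, 0.5], [0.5, 1.0]]),
    "C": np.array([[0.5, 0.0], [0.0, 0.5]]),
}

def v(coalition):
    """Limiting characteristic function: proportional to the log-determinant
    of the coalition's joint Fisher information (empty coalition worth 0
    by convention)."""
    if not coalition:
        return 0.0
    F = sum(fisher[p] for p in coalition)  # assumed additivity across players
    sign, logdet = np.linalg.slogdet(F)
    return 0.5 * logdet

phi = shapley_values(list(fisher), v)
```

By the efficiency axiom, the resulting values sum to the worth of the grand coalition, so no per-game KL divergence computation is needed once the Fisher information estimates are available.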