Measuring contributions is a classical problem in cooperative game theory, for which the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games where players perform Bayesian inference using their combined data, and the posterior-prior KL divergence is used as the characteristic function. We show that, under some regularity conditions, the difference in Shapley value between any two players converges in probability to the difference in Shapley value of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result enables this to be achieved without any costly computations of posterior-prior KL divergences; only a consistent estimator of the Fisher information is needed. The effectiveness of our framework is demonstrated with experiments using real-world data.
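The limiting game described above can be illustrated with a small numerical sketch. The code below is not the paper's method, only a toy example under stated assumptions: it computes exact Shapley values by coalition enumeration for three players, using a characteristic function proportional to the log-determinant of the coalition's joint (summed) Fisher information, with `v(∅) = 0` and a proportionality constant of 1/2 taken as assumptions; the per-player Fisher information matrices are randomly generated placeholders.

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions (tractable for small n)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight |S|! (n - |S| - 1)! / n! on p's marginal contribution
                w = factorial(k) * factorial(n - 1 - k) / factorial(n)
                phi[p] += w * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi

# Hypothetical per-player Fisher information matrices (illustrative data,
# not from the paper): symmetric positive definite 2x2 matrices.
rng = np.random.default_rng(0)
fisher = {}
for p in range(3):
    A = rng.standard_normal((2, 2))
    fisher[p] = A @ A.T + np.eye(2)

def v(S):
    # Limiting characteristic function: proportional to the log-determinant
    # of the coalition's joint Fisher information (assumed additive here),
    # with v(empty coalition) = 0 by convention.
    if not S:
        return 0.0
    joint = sum(fisher[p] for p in S)
    _, logdet = np.linalg.slogdet(joint)
    return 0.5 * logdet

phi = shapley_values(list(range(3)), v)
```

By the efficiency axiom, the values in `phi` sum to `v` of the grand coalition, and pairwise differences `phi[i] - phi[j]` are the quantities the convergence result concerns.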