In recent years, federated learning has been embraced as an approach for enabling collaboration across large populations of learning agents. However, little is known about how collaboration protocols should take agents' incentives into account when allocating individual sample-collection burdens for communal learning in order to sustain such collaborations. Inspired by game-theoretic notions, this paper introduces a framework for incentive-aware learning and data sharing in federated learning. Our stable and envy-free equilibria capture notions of collaboration among agents interested in meeting their learning objectives while keeping their own sample-collection burden low. For example, in an envy-free equilibrium, no agent would wish to swap their sampling burden with that of any other agent, and in a stable equilibrium, no agent would wish to unilaterally reduce their sampling burden. In addition to formalizing this framework, our contributions include characterizing the structural properties of such equilibria, proving when they exist, and showing how they can be computed. Furthermore, we compare the sample complexity of incentive-aware collaboration with that of optimal collaboration when one ignores agents' incentives.
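As a rough formalization of the two equilibrium notions (the notation below is ours, introduced only for illustration, not taken from the paper): let $\theta_i$ denote agent $i$'s sampling burden, let $\theta = (\theta_1,\dots,\theta_k)$ be the joint allocation, and write $\theta \in F_i$ when the allocation $\theta$ meets agent $i$'s learning objective.

% A minimal sketch under the assumed notation above.
\begin{align*}
  \textbf{Stable: } &
    \theta \in \textstyle\bigcap_i F_i
    \;\text{ and }\;
    \forall i,\ \forall \theta_i' < \theta_i:\;
    (\theta_i', \theta_{-i}) \notin F_i, \\[2pt]
  \textbf{Envy-free: } &
    \theta \in \textstyle\bigcap_i F_i
    \;\text{ and }\;
    \forall i, j:\;
    \sigma_{ij}(\theta) \in F_i \;\Longrightarrow\; \theta_i \le \theta_j,
\end{align*}

where $\sigma_{ij}(\theta)$ swaps the $i$-th and $j$-th coordinates of $\theta$. Under this reading, stability says no agent can cut its own burden unilaterally and still meet its objective, and envy-freeness says no agent would be better off holding another agent's (feasible) burden in its place.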