Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning models without directly sharing local data. Although there is a rich literature on designing federated learning algorithms, most prior works implicitly assume that all clients are willing to participate in an FL scheme. In practice, clients may not benefit from joining FL, especially in light of potential costs such as privacy risks and computation overhead. In this work, we study clients' incentives in federated learning to help the service provider design better solutions and ensure clients make better decisions. We are the first to model clients' behaviors in FL as a network effects game, where each client's benefit depends on the other clients who also join the network. Using this setup, we analyze the dynamics of clients' participation and characterize the equilibrium, at which no client has an incentive to alter their decision. Specifically, we show that dynamics in the population naturally converge to an equilibrium without the need for explicit interventions. Finally, we provide a cost-efficient payment scheme that incentivizes clients to reach a desired equilibrium when the initial network is empty.