Federated learning trains models across devices holding distributed data, protecting privacy while achieving a model comparable to that of centralized machine learning. A large pool of workers with data and computing power is the foundation of federated learning. However, the unavoidable costs of participation prevent self-interested workers from serving for free. Moreover, due to data isolation, task publishers lack effective methods to select, evaluate, and pay reliable workers with high-quality data. We therefore design an auction-based incentive mechanism for horizontal federated learning with reputation and contribution measurement. By designing a reasonable method of measuring contribution, we establish worker reputations that are easy to lose and difficult to gain. Workers bid for tasks through a reverse auction, and the task publisher selects workers by combining reputation and bid price. Under the budget constraint, winning workers are paid according to their performance. We prove that our mechanism satisfies individual rationality for honest workers, budget feasibility, truthfulness, and computational efficiency.
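The two core ideas above can be sketched in code: a reputation score that rises slowly and falls quickly, and a budget-constrained reverse auction that ranks workers by reputation per unit of bid price. This is a minimal illustrative sketch only; the function names, weights, and ranking rule are assumptions for exposition, not the paper's actual definitions.

```python
def update_reputation(rep, contribution, gain=0.1, loss=0.5):
    """Asymmetric update: reputation is easy to lose, difficult to gain.

    A positive contribution raises reputation slowly (saturating toward 1);
    a negative contribution lowers it quickly. Weights are illustrative.
    """
    if contribution >= 0:
        rep += gain * contribution * (1.0 - rep)  # slow, saturating gain
    else:
        rep += loss * contribution * rep          # fast, proportional loss
    return min(max(rep, 0.0), 1.0)


def select_winners(bids, budget):
    """Greedy reverse auction (illustrative): rank workers by reputation
    per unit bid price, then admit them while the budget allows."""
    ranked = sorted(bids, key=lambda w: w["rep"] / w["bid"], reverse=True)
    winners, spent = [], 0.0
    for w in ranked:
        if spent + w["bid"] <= budget:
            winners.append(w["id"])
            spent += w["bid"]
    return winners
```

For example, with three bidders and a budget of 4, the worker offering the best reputation-to-price ratio is selected first, and workers whose bids would exceed the remaining budget are skipped.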