Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to collaboratively train shared models across data silos. Existing FL work mostly focuses on designing advanced FL algorithms to improve model performance. However, the economic considerations of the clients, such as fairness and incentives, have yet to be fully explored. Without such considerations, self-interested clients may lose interest and leave the federation. To address this problem, we design a novel incentive mechanism that combines a client selection process, which removes low-quality clients, with a money transfer process that ensures a fair reward distribution. Our experimental results demonstrate that the proposed incentive mechanism effectively improves the longevity and fairness of the federation.
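The abstract describes the mechanism only at a high level; the sketch below is a minimal illustration of a plausible two-step round, not the paper's exact algorithm. The quality scores, contribution measure, threshold, and budget (`QUALITY_THRESHOLD`, `REWARD_BUDGET`) are all illustrative assumptions introduced here.

```python
# Hypothetical sketch of a two-step incentive mechanism:
# (1) client selection drops clients whose measured quality is too low;
# (2) money transfer splits a fixed reward budget among the remaining
#     clients in proportion to their measured contribution.
from dataclasses import dataclass

QUALITY_THRESHOLD = 0.5   # assumed cutoff for "low-quality" clients
REWARD_BUDGET = 100.0     # assumed total payment per round


@dataclass
class Client:
    name: str
    quality: float       # e.g., validation score of the client's model update
    contribution: float  # e.g., marginal improvement attributed to the client


def select_clients(clients: list[Client]) -> list[Client]:
    """Client selection: keep only clients at or above the quality threshold."""
    return [c for c in clients if c.quality >= QUALITY_THRESHOLD]


def transfer_rewards(selected: list[Client]) -> dict[str, float]:
    """Money transfer: divide the budget proportionally to each client's contribution."""
    total = sum(c.contribution for c in selected)
    if total == 0:
        return {c.name: 0.0 for c in selected}
    return {c.name: REWARD_BUDGET * c.contribution / total for c in selected}


if __name__ == "__main__":
    clients = [
        Client("A", quality=0.9, contribution=3.0),
        Client("B", quality=0.3, contribution=1.0),  # removed as low-quality
        Client("C", quality=0.7, contribution=1.0),
    ]
    selected = select_clients(clients)
    print(transfer_rewards(selected))  # e.g. {'A': 75.0, 'C': 25.0}
```

Under these assumptions, removing low-quality clients before payment keeps free riders from diluting the budget, and proportional payouts tie each client's reward to its contribution, which is one simple way to operationalize the fairness goal stated above.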