Federated learning is an emerging distributed collaborative learning paradigm adopted by many applications today. Its effectiveness relies on clients' collective efforts and their willingness to contribute local data. However, due to privacy concerns and the costs of data collection and model training, clients may not always contribute all the data they possess, which degrades the performance of the global model. This paper presents an incentive mechanism that encourages clients to contribute as much data as they can obtain. Unlike previous incentive mechanisms, our approach does not monetize data. Instead, it implicitly uses model performance as the reward, i.e., significant contributors are rewarded with better models. We theoretically prove that, under certain conditions, our incentive mechanism induces clients to participate in federated learning with all the data they can possibly possess.