Federated learning (FL) is an emerging paradigm for machine learning in which data owners collaboratively train a model by sharing gradients instead of their raw data. Two fundamental research problems in FL are incentive mechanisms and privacy protection. The former focuses on how to incentivize data owners to participate in FL. The latter studies how to protect data owners' privacy while maintaining the high utility of trained models. However, incentive mechanisms and privacy protection in FL have been studied separately, and no prior work solves both problems at the same time. In this work, we address the two problems simultaneously by proposing FL-Market, which incentivizes data owners' participation by providing both appropriate payments and privacy protection. FL-Market enables data owners to obtain compensation according to their privacy loss, quantified by local differential privacy (LDP). Our insight is that, by meeting data owners' personalized privacy preferences and providing appropriate payments, we can (1) incentivize privacy risk-tolerant data owners to set larger privacy parameters (i.e., to contribute gradients with less noise) and (2) provide the preferred level of privacy protection for privacy risk-averse data owners. To achieve this, we design a personalized LDP-based FL framework with a deep learning-empowered auction mechanism that incentivizes the trading of less-noisy gradients, together with optimal aggregation mechanisms for model updates. Our experiments verify the effectiveness of the proposed framework and mechanisms.
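To make the "larger privacy parameter ⇒ less noise" intuition concrete, the following is a minimal sketch of per-owner gradient perturbation under ε-LDP using the standard Laplace mechanism. This is an illustrative assumption about the mechanism, not the paper's actual FL-Market protocol: the function name, the L1-clipping step, and the sensitivity bound of 2 × clip_norm are all choices made here for the sketch.

```python
import numpy as np

def ldp_perturb_gradient(grad, epsilon, clip_norm=1.0, rng=None):
    """Perturb a local gradient under epsilon-LDP (Laplace mechanism sketch).

    Clipping the gradient to L1 norm <= clip_norm bounds the L1 sensitivity
    between any two possible inputs by 2 * clip_norm, so Laplace noise with
    scale 2 * clip_norm / epsilon suffices for epsilon-LDP. A larger epsilon
    (a privacy risk-tolerant owner) yields a smaller noise scale, i.e., a
    less-noisy gradient for the aggregator.
    """
    rng = np.random.default_rng(rng)
    grad = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(grad, ord=1)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)  # clip to bound sensitivity
    scale = 2.0 * clip_norm / epsilon     # noise scale shrinks as epsilon grows
    return grad + rng.laplace(0.0, scale, size=grad.shape)
```

Each data owner would apply this locally with their personally chosen ε before uploading; the server never sees the raw gradient, only the perturbed one.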