Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce threats to user data privacy. Training is performed on raw data that remains on the users' devices, called clients, and only the training results, called gradients, are sent to the server, where they are aggregated to produce an updated model. However, the server cannot be assumed trustworthy with private information, such as metadata about the owner or source of the data, so hiding client information from the server helps mitigate privacy-related attacks. Protecting the privacy of the client's identity, along with the privacy of the client's data, therefore makes such attacks more difficult. This paper proposes an efficient, privacy-preserving protocol for FL based on group signatures. A new group signature scheme for federated learning, called GSFL, is designed not only to protect the privacy of the client's data and identity but also to significantly reduce computation and communication costs, taking into account the iterative nature of federated learning. We show that GSFL outperforms existing approaches in terms of computation, communication, and signaling costs. We also show that the proposed protocol can withstand various security attacks in the federated learning environment.
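The round structure described above (clients compute gradients locally on private data; the server only aggregates) can be sketched as follows. This is a minimal illustrative toy, not the paper's GSFL protocol: the model, data, and learning rate are made up for the example, and the signature layer is omitted entirely.

```python
# Minimal sketch of a generic FL round: each client computes a local
# gradient on its own raw data, and the server sees only gradients,
# which it averages into a global model update. Illustrative only;
# this omits GSFL's group-signature layer entirely.

from typing import List

def local_gradient(model: List[float], data: List[float]) -> List[float]:
    # Toy client-side step: mean-squared-error gradient for a
    # 1-parameter model y = w*x whose targets are 2*x, so the true
    # weight is 2.0. Raw data never leaves this function.
    w = model[0]
    g = sum(2 * (w * x - 2 * x) * x for x in data) / len(data)
    return [g]

def server_aggregate(grads: List[List[float]]) -> List[float]:
    # Server-side step: average the clients' gradients; the server
    # never receives any client's training data.
    n = len(grads)
    return [sum(g[i] for g in grads) / n for i in range(len(grads[0]))]

model = [0.0]
clients = [[1.0, 2.0], [3.0], [0.5, 1.5]]  # private per-client datasets
for _ in range(50):  # iterative FL rounds
    grads = [local_gradient(model, d) for d in clients]
    agg = server_aggregate(grads)
    model = [w - 0.1 * g for w, g in zip(model, agg)]
# model[0] converges toward the true weight 2.0
```

Note that even in this benign sketch the server learns each client's gradient and network identity, which motivates hiding the client's identity (as GSFL does) in addition to its data.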