Federated learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a shared model without disclosing their local data. To address the privacy risks posed by shared gradients, several privacy-preserving machine learning schemes based on multi-client functional encryption (MCFE) have been proposed. However, existing MCFE-based schemes support neither client dropout nor flexible threshold selection, both of which are essential for practical FL. In this paper, we design a flexible threshold multi-client functional encryption for inner product (FTMCFE-IP) scheme, in which multiple clients generate ciphertexts independently without any interaction. In the encryption phase, clients can choose a threshold flexibly without reinitializing the system. Decryption succeeds whenever the number of online clients meets the threshold. An authorized user is allowed to compute the inner product of the vectors associated with his/her functional key and the ciphertext, respectively, but learns nothing else. In particular, the presented scheme tolerates client dropout. Furthermore, we provide the definition and security model of our FTMCFE-IP scheme, and propose a concrete construction. The security of the designed scheme is formally proven. Finally, we implement and evaluate our FTMCFE-IP scheme.
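To make the two key properties concrete, the following toy sketch (not the paper's construction; the field modulus, function names, and use of Shamir secret sharing are illustrative assumptions) shows the inner-product functionality revealed to an authorized key holder, and why a (t, n)-threshold reconstruction still succeeds when some clients drop out, as long as at least t remain online.

```python
import random

# Illustrative field modulus (a Mersenne prime); any large prime works here.
P = 2**61 - 1

def inner_product(x, y):
    """The functionality an authorized user learns: <x, y> mod P, nothing else."""
    return sum(a * b for a, b in zip(x, y)) % P

def share(secret, t, n):
    """Shamir (t, n) secret sharing: any t of the n shares reconstruct the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# Dropout example: 5 clients, threshold 3, only 3 remain online.
result = inner_product([1, 2, 3], [4, 5, 6])   # 4 + 10 + 18 = 32
shares = share(result, t=3, n=5)
online = shares[:3]                             # two clients dropped out
assert reconstruct(online) == result
```

In the actual FTMCFE-IP scheme the masking and key derivation are cryptographic rather than plain secret sharing, but the dropout behavior illustrated here is the same: decryption is possible exactly when the number of online clients meets the chosen threshold.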