Federated learning enables multiple distributed participants (potentially on different clouds) to collaboratively train machine/deep learning models by sharing parameters/gradients. However, sharing gradients, instead of centralizing data, may not be as private as one would expect: reverse-engineering attacks on plaintext gradients have been demonstrated to be practically feasible. Existing solutions for differentially private federated learning, while promising, lead to less accurate models and require nontrivial hyperparameter tuning. In this paper, we examine the use of additive homomorphic encryption (specifically the Paillier cipher) to design secure federated gradient descent techniques that (i) do not require the addition of statistical noise or hyperparameter tuning, (ii) do not alter the accuracy or utility of the final model, (iii) ensure that the plaintext model parameters/gradients of a participant are never revealed to any other participant or to any third-party coordinator involved in the federated learning job, (iv) minimize the trust placed in any third-party coordinator, and (v) are efficient and cost-effective, with minimal overhead.
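To make the additive homomorphism that underpins such secure aggregation concrete, the minimal sketch below is our own illustration rather than the paper's implementation: it assumes the third-party `phe` (python-paillier) library, and the key length, toy gradient values, and variable names are all illustrative. Two participants encrypt gradients under a shared public key, an untrusted coordinator sums the ciphertexts directly, and only the key holders can decrypt the aggregate.

```python
# Minimal sketch of additively homomorphic gradient aggregation with the
# Paillier cipher, assuming the third-party `phe` (python-paillier) library.
from phe import paillier

# Key pair shared among participants; the coordinator sees only the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each participant encrypts its local gradient vector element-wise.
grad_a = [0.12, -0.48, 0.30]   # participant A's plaintext gradients (toy values)
grad_b = [-0.05, 0.22, -0.11]  # participant B's plaintext gradients (toy values)
enc_a = [public_key.encrypt(g) for g in grad_a]
enc_b = [public_key.encrypt(g) for g in grad_b]

# The coordinator aggregates ciphertexts directly, since for Paillier
# Enc(x) + Enc(y) decrypts to x + y. It never observes a plaintext gradient.
enc_sum = [ca + cb for ca, cb in zip(enc_a, enc_b)]

# Participants, who hold the private key, decrypt the aggregated gradient.
agg = [private_key.decrypt(c) for c in enc_sum]
print(agg)  # approximately [0.07, -0.26, 0.19]
```

Because the aggregation operates entirely on ciphertexts, no statistical noise is added and the decrypted aggregate equals the plaintext sum, which is consistent with properties (i)-(iii) claimed above; `phe` exposes the homomorphic addition through the overloaded `+` operator on its encrypted numbers.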