Homomorphic encryption is a widely used gradient protection technique in privacy-preserving federated learning. However, existing encrypted federated learning systems require a trusted third party to generate and distribute key pairs to the connected participants, which is ill-suited to the federated setting and exposes them to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. To mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning in which the key pairs are collaboratively generated without the help of a third party. By quantizing the model parameters on the clients and performing an approximate aggregation on the server, the proposed method avoids encrypting and decrypting the entire model. In addition, a threshold-based secret sharing technique is designed so that no single party can hold the global private key for decryption, while aggregated ciphertexts can still be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces communication costs and computational complexity compared with existing encrypted federated learning approaches, without compromising performance or security.
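To make the quantize-then-aggregate idea concrete, the following is a minimal Python sketch rather than the paper's actual protocol: the fixed-point scale of 2^16, the clipping range [-1, 1], and the three toy clients are illustrative assumptions, and the additively homomorphic encryption is mocked with plain integer sums so that only the quantization and aggregation arithmetic are shown.

```python
# Minimal sketch (NOT the paper's protocol) of quantize-then-aggregate:
# each client maps its float parameters to bounded integers, and the server
# sums the integer vectors, which models ciphertext addition under an
# additively homomorphic scheme. SCALE and CLIP are assumed values.

SCALE = 2 ** 16          # fixed-point scale factor (assumption)
CLIP = 1.0               # clip parameters to [-CLIP, CLIP] before quantizing

def quantize(params):
    """Map float parameters to bounded integers suitable for additive HE."""
    return [round(max(-CLIP, min(CLIP, p)) * SCALE) for p in params]

def dequantize(int_params, num_clients):
    """Undo the scaling and average the aggregated integers."""
    return [q / (SCALE * num_clients) for q in int_params]

# Three clients' local model updates (toy example).
clients = [
    [0.12, -0.50, 0.33],
    [0.10, -0.45, 0.30],
    [0.14, -0.55, 0.36],
]

# Server-side aggregation: with an additively homomorphic scheme the server
# would add ciphertexts; summing the quantized integers models that exactly.
agg = [0, 0, 0]
for update in clients:
    for i, q in enumerate(quantize(update)):
        agg[i] += q

print(dequantize(agg, len(clients)))  # approximately [0.12, -0.50, 0.33]
```

In a real deployment, the per-client integer vectors would be ciphertexts under an additively homomorphic scheme such as Paillier, and the server's integer sums would be ciphertext additions, so the server never sees individual updates in the clear.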
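The offline-tolerant decryption can be illustrated with classic t-of-n Shamir secret sharing; the sketch below is a stand-in for the paper's threshold scheme under stated assumptions, with an arbitrary prime field and illustrative threshold parameters, not the paper's actual construction.

```python
# Minimal sketch of t-of-n Shamir secret sharing over a prime field, used here
# as a stand-in for the paper's threshold decryption. The prime, threshold,
# and share count are illustrative assumptions.
import random

P = 2 ** 61 - 1   # Mersenne prime defining the field (assumption)

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime.
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789                       # stands in for the global private key
shares = make_shares(key, t=3, n=5)   # distributed across 5 clients
# Any 3 shares suffice, so decryption tolerates up to 2 offline clients.
assert reconstruct(random.sample(shares, 3)) == key
```

Because any t shares interpolate the degree-(t-1) polynomial at zero, up to n - t clients can drop out without blocking decryption, which matches the robustness property described in the abstract.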