Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and exchanging only model updates. Existing FL protocol designs have been shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness. Recently proposed defenses focus on ensuring either privacy or robustness, but not both. In this paper, we develop a framework called PRECAD, which simultaneously achieves differential privacy (DP) and enhances robustness against model poisoning attacks with the help of cryptography. Using secure multi-party computation (MPC) techniques (e.g., secret sharing), noise is added to the model updates by the honest-but-curious server(s) (instead of by each client) without revealing clients' inputs, which achieves the benefit of centralized DP of providing a better privacy-utility tradeoff than local-DP-based solutions. Meanwhile, a crypto-aided secure validation protocol is designed to verify that each client's contribution to the model update is bounded, without leaking privacy. We show analytically that the noise added to ensure DP also provides enhanced robustness against malicious model submissions. We experimentally demonstrate that our PRECAD framework achieves a better privacy-utility tradeoff and improves the robustness of the trained models.
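To make the two mechanisms in the abstract concrete, the following Python/NumPy sketch illustrates (i) clients clipping the L2 norm of their updates (the per-client contribution bound that the secure validation protocol checks) and (ii) two non-colluding servers recombining additive secret shares and adding Gaussian noise once to the aggregate. All names and parameters here (clip_update, secret_share, servers_aggregate, the bound C, the noise scale SIGMA) are illustrative assumptions, not the paper's protocol or API; a real MPC deployment would also operate over a finite field with fixed-point encoding rather than floats.

```python
import numpy as np

# Illustrative parameters, not values from the paper.
C = 1.0        # L2 clipping bound on each client's update
SIGMA = 0.8    # Gaussian noise scale added by the servers for central DP
rng = np.random.default_rng(0)

def clip_update(update, bound=C):
    """Client side: bound the L2 norm of a model update. This is the
    per-client contribution bound that the secure validation protocol
    verifies without seeing the update in the clear."""
    norm = np.linalg.norm(update)
    return update * min(1.0, bound / max(norm, 1e-12))

def secret_share(update):
    """Split an update into two additive shares, one per server, so that
    neither server alone learns anything about the update. (Real MPC
    shares over a finite field; plain floats keep this sketch short.)"""
    mask = rng.normal(size=update.shape)
    return update - mask, mask  # the two shares sum to the update

def servers_aggregate(shares_a, shares_b, sigma=SIGMA):
    """Server side: each server sums the shares it holds; recombining
    the two partial sums reveals only the aggregate. Noise is added
    once, by the servers, which is what gives centralized DP its
    utility advantage over per-client (local DP) noising."""
    aggregate = np.sum(shares_a, axis=0) + np.sum(shares_b, axis=0)
    return aggregate + rng.normal(scale=sigma, size=aggregate.shape)

# Toy round: three clients, each with a 4-dimensional "model update".
updates = [clip_update(rng.normal(size=4)) for _ in range(3)]
shares = [secret_share(u) for u in updates]
noisy_sum = servers_aggregate([a for a, _ in shares], [b for _, b in shares])
print(noisy_sum)  # DP aggregate; no individual update was ever revealed
```

The point the sketch makes is that the noise term appears once in the aggregate rather than once per client, so the effective signal-to-noise ratio improves with the number of participants, which is the privacy-utility advantage of centralized over local DP that the abstract refers to.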