Federated Learning is an emerging decentralized machine learning paradigm that allows a large number of clients to train a joint model without sharing their private data. Participants instead share only the ephemeral updates necessary to train the model. To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation: clients encrypt their gradient updates, and only the aggregated model is revealed to the server. Achieving this level of data protection, however, presents new challenges to the robustness of Federated Learning, i.e., its ability to tolerate failures and attacks. Unfortunately, in this setting, a malicious client can easily exert influence on the model's behavior without being detected. As Federated Learning is deployed in practice in a range of sensitive applications, its robustness is growing in importance. In this paper, we take a step towards understanding and improving the robustness of secure Federated Learning. We begin with a systematic study that evaluates and analyzes existing attack vectors, discusses potential defenses, and assesses their effectiveness. We then present RoFL, a secure Federated Learning system that improves robustness against malicious clients through input checks on the encrypted model updates. RoFL extends Federated Learning's secure aggregation protocol to allow expressing a variety of properties and constraints on model updates using zero-knowledge proofs. To enable RoFL to scale to typical Federated Learning settings, we introduce several ML and cryptographic optimizations specific to Federated Learning. We implement and evaluate a prototype of RoFL and show that realistic ML models can be trained in a reasonable time while improving robustness.
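To make the secure-aggregation setting concrete, the following is a minimal illustrative sketch of one common construction, additive pairwise masking, in which each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate. This is a toy example for intuition only; it is not RoFL's protocol, and all function names here are hypothetical.

```python
# Toy sketch of secure aggregation via pairwise additive masking.
# NOT the paper's protocol: no cryptographic key agreement, dropout
# handling, or zero-knowledge input checks are modeled.
import random

def pairwise_masks(num_clients, dim, seed=0):
    """For each client pair (i, j), draw a random mask vector m;
    client i adds m to its mask, client j subtracts it, so the
    masks of all clients sum to the zero vector."""
    rng = random.Random(seed)
    masks = {i: [0.0] * dim for i in range(num_clients)}
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = [rng.uniform(-1, 1) for _ in range(dim)]
            masks[i] = [a + b for a, b in zip(masks[i], m)]
            masks[j] = [a - b for a, b in zip(masks[j], m)]
    return masks

def masked_update(update, mask):
    """A client submits its gradient update plus its mask; the raw
    update is hidden from the server."""
    return [u + m for u, m in zip(update, mask)]

def server_aggregate(masked_updates):
    """The server sums the masked updates; pairwise masks cancel,
    leaving only the aggregate of the plain updates."""
    dim = len(masked_updates[0])
    return [sum(u[k] for u in masked_updates) for k in range(dim)]

# Three clients with toy 2-dimensional gradient updates.
updates = [[1.0, 2.0], [0.5, -1.0], [-0.5, 3.0]]
masks = pairwise_masks(3, 2)
masked = [masked_update(u, masks[i]) for i, u in enumerate(updates)]
agg = server_aggregate(masked)
# agg equals the plain sum [1.0, 4.0] because the masks cancel.
```

The sketch also illustrates the robustness problem the paper addresses: because the server sees only masked updates, a malicious client could submit an arbitrarily large update without detection, which is why RoFL attaches zero-knowledge proofs of constraints (e.g., norm bounds) to the encrypted updates.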