Privacy-preserving federated learning allows multiple users to jointly train a model under the coordination of a central server. The server learns only the final aggregation result, so the users' (private) training data is not leaked through the individual model updates. However, keeping the individual updates private allows malicious users to mount Byzantine attacks and degrade model accuracy without being detected. The best existing defenses against Byzantine workers rely on robust rank-based statistics, e.g., the median, to find malicious updates. However, implementing privacy-preserving rank-based statistics is nontrivial and does not scale in the secure domain, as it requires sorting all individual updates. We establish the first private robustness check that uses high-breakdown-point rank-based statistics on aggregated model updates. By exploiting randomized clustering, we significantly improve the scalability of our defense without compromising privacy. We leverage our statistical bounds in zero-knowledge proofs to detect and remove malicious updates without revealing the private user updates. Our novel framework, zPROBE, enables Byzantine-resilient and secure federated learning. Empirical evaluations demonstrate that zPROBE provides a low-overhead solution to defend against state-of-the-art Byzantine attacks while preserving privacy.
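To make the role of rank-based statistics concrete, the following is a minimal plaintext sketch of the kind of median-based robustness check the abstract refers to. It is not the zPROBE protocol (which performs this check privately, inside zero-knowledge proofs, over clustered aggregates); the updates, dimensions, and threshold rule below are all illustrative assumptions. The idea is that honest updates cluster near the coordinate-wise median, while a Byzantine update lies far from it and is flagged:

```python
import math
from statistics import median

# Hypothetical toy data: four honest 3-dimensional model updates that
# cluster together, plus one Byzantine update chosen to skew the average.
honest = [[0.10, -0.05, 0.02], [0.08, -0.04, 0.01],
          [0.12, -0.06, 0.03], [0.09, -0.05, 0.02]]
byzantine = [[5.0, 5.0, 5.0]]  # malicious, accuracy-degrading update
updates = honest + byzantine

# Coordinate-wise median: a rank-based statistic with a high breakdown
# point, so a minority of Byzantine updates cannot move it far.
center = [median(u[j] for u in updates) for j in range(3)]

def dist(u, c):
    """Euclidean distance between an update and the robust center."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, c)))

# Flag updates whose distance from the median exceeds a threshold.
# The threshold rule (3x the median distance) is an assumption for this
# sketch, not a bound from the paper.
d = [dist(u, center) for u in updates]
tau = 3 * median(d)
flagged = [i for i, v in enumerate(d) if v > tau]
print(flagged)  # index of the Byzantine update
```

In the plaintext setting this check requires access to every individual update (and sorting them to compute medians), which is exactly what is expensive and privacy-violating in the secure domain; the paper's contribution is deriving statistical bounds that allow an equivalent check over randomized clusters without revealing the updates.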