Privacy and Byzantine resilience are two indispensable requirements for a federated learning (FL) system. Although privacy and Byzantine security have each been studied extensively in their own right, solutions that address both remain sparse, owing to the difficulty of reconciling privacy-preserving and Byzantine-resilient algorithms. In this work, we propose a solution to this two-fold problem: we use our variant of the differentially private stochastic gradient descent (DP-SGD) algorithm to preserve privacy, and then apply our Byzantine-resilient algorithms. While existing works follow this general approach, an in-depth analysis of the interplay between DP and Byzantine resilience has been missing, leading to unsatisfactory performance. Specifically, previous works strive to reduce the impact of the random noise introduced by DP on Byzantine aggregation. In contrast, we leverage this random noise to construct an aggregation rule that effectively rejects many existing Byzantine attacks. We provide both theoretical proofs and empirical experiments showing that our protocol is effective: it retains high accuracy while preserving the DP guarantee and Byzantine resilience. Compared with previous work, our protocol 1) achieves significantly higher accuracy even in the high-privacy regime, and 2) works well even when up to 90% of the distributed workers are Byzantine.