We propose Fed-DPRoC, a novel federated learning (FL) framework that jointly provides differential privacy (DP), Byzantine robustness, and communication efficiency. Central to our approach is the concept of robust-compatible compression, which reduces the bi-directional communication overhead without undermining the robustness of the aggregation. We instantiate our framework as RobAJoL, which couples a Johnson-Lindenstrauss (JL)-based compression mechanism with robust averaging. Our theoretical analysis establishes the compatibility of the JL transform with robust averaging, ensuring that RobAJoL retains its robustness guarantees, satisfies DP, and substantially reduces communication overhead. We further present simulation results on CIFAR-10, Fashion-MNIST, and FEMNIST that validate our theoretical claims. For a fair comparison, we augment a state-of-the-art communication-efficient and robust FL scheme with DP, and show that RobAJoL outperforms it in terms of robustness and utility under different Byzantine attacks.
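As a minimal illustration of the JL-based compression primitive named above, the sketch below projects a mock model update through a Gaussian random matrix; the dimensions, seed, and Gaussian construction are generic choices for exposition, not the paper's exact instantiation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10_000, 500          # original update dimension, compressed dimension (illustrative)
x = rng.standard_normal(d)  # a client's (mock) model update

# Gaussian JL matrix: entries ~ N(0, 1/k), so E[||Px||^2] = ||x||^2.
P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))
x_compressed = P @ x        # k numbers transmitted instead of d

# JL lemma: squared norms (hence pairwise distances between updates, which
# robust averaging relies on) are preserved up to small multiplicative
# distortion with high probability.
distortion = np.linalg.norm(x_compressed) ** 2 / np.linalg.norm(x) ** 2
print(f"compressed {d} -> {k} dims, squared-norm ratio = {distortion:.3f}")
```

The approximate preservation of distances under the projection is what makes this style of compression compatible with distance-based robust aggregation.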