Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. However, FL is vulnerable to Byzantine attacks, where adversaries manipulate client models to compromise the federated model, and to privacy inference attacks, where adversaries exploit client models to infer private data. Existing defenses against both Byzantine and privacy inference attacks introduce significant computational and communication overhead, creating a gap between theory and practice. To address this, we propose ABBR, a practical framework for Byzantine-robust and privacy-preserving FL. We are the first to use dimensionality reduction to speed up the private computation of complex filtering rules in privacy-preserving FL. Additionally, we analyze the accuracy loss of vector-wise filtering in low-dimensional space and introduce an adaptive tuning strategy to minimize the impact on the global model of malicious models that bypass filtering. We implement ABBR with state-of-the-art Byzantine-robust aggregation rules and evaluate it on public datasets, showing that it runs significantly faster, incurs minimal communication overhead, and retains nearly the same Byzantine resilience as the baselines.
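To illustrate the core idea of filtering in a reduced space, the following is a minimal plaintext sketch (not the paper's secure protocol): client updates are projected to a low dimension with a random Gaussian matrix, and a simple vector-wise rule keeps the updates closest to the coordinate-wise median. The function names, dimensions, and the median-distance rule are illustrative assumptions, not ABBR's actual aggregation rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(updates, k):
    """Project d-dimensional client updates down to k dimensions
    (Johnson-Lindenstrauss-style Gaussian projection; illustrative)."""
    d = updates.shape[1]
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    return updates @ P

def filter_outliers(low_dim, keep):
    """Vector-wise filtering sketch: keep the `keep` projected updates
    closest to the coordinate-wise median of all projected updates."""
    median = np.median(low_dim, axis=0)
    dists = np.linalg.norm(low_dim - median, axis=1)
    return np.argsort(dists)[:keep]

# Toy data: 10 honest updates near zero, 2 clearly malicious updates far away.
honest = rng.normal(0.0, 0.1, size=(10, 1000))
malicious = rng.normal(5.0, 0.1, size=(2, 1000))
updates = np.vstack([honest, malicious])

# Filter in 32 dimensions instead of 1000.
kept = filter_outliers(random_projection(updates, 32), keep=10)
```

In a privacy-preserving deployment the distance computations would run under secure computation, which is why shrinking the dimension from 1000 to 32 matters; this sketch only shows the geometry, not the cryptography.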