Federated learning enables decentralized model training without sharing raw data, preserving data privacy. However, its vulnerability to critical security threats, such as gradient inversion and model poisoning by malicious clients, remains unresolved. Existing solutions often address these issues separately, sacrificing either system robustness or model accuracy. This work introduces Tazza, a secure and efficient federated learning framework that addresses both challenges simultaneously. By leveraging the permutation equivariance and invariance properties of neural networks through weight shuffling and shuffled model validation, Tazza strengthens resilience against diverse poisoning attacks while preserving data confidentiality and high model accuracy. Comprehensive evaluations across multiple datasets and embedded platforms show that Tazza achieves robust defense with up to 6.7x higher computational efficiency than alternative schemes, without compromising performance.
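The permutation property the abstract refers to can be illustrated with a minimal sketch (an illustration of the general principle, not Tazza's actual implementation): permuting the hidden units of a two-layer MLP — the rows of the first layer's weights and biases together with the matching columns of the second layer's weights — produces a different-looking parameter set that computes exactly the same function, which is what makes shuffled weights both obfuscated and still validatable.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # hidden layer
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # output layer

def mlp(x, W1, b1, W2, b2):
    """Two-layer MLP with a ReLU hidden layer."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

# A (non-identity) shuffle of the 8 hidden units.
perm = np.roll(np.arange(8), 1)
W1p, b1p = W1[perm], b1[perm]   # permute rows of layer 1
W2p = W2[:, perm]               # permute matching columns of layer 2

# The shuffled weights define the same function: outputs match on any input.
x = rng.normal(size=4)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2))
```

Because the shuffled model is functionally identical, a validator can score it on held-out data without ever seeing the original weight arrangement.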