As a distributed machine learning paradigm, federated learning (FL) conveys a sense of privacy to contributing participants because training data never leaves their devices. However, gradient updates and the aggregated model still reveal sensitive information. In this work, we propose HyFL, a new framework that combines private training and inference with secure aggregation and hierarchical FL to provide end-to-end protection and facilitate large-scale global deployments. Additionally, we show that HyFL strictly limits the attack surface for malicious participants: they are restricted to data-poisoning attacks and cannot significantly reduce accuracy.