Secure aggregation is a popular protocol in privacy-preserving federated learning that enables model aggregation without revealing the individual models in the clear. However, conventional secure aggregation protocols incur significant communication overhead, which can become a major bottleneck in real-world bandwidth-limited applications. To address this challenge, in this work we propose a lightweight gradient sparsification framework for secure aggregation, in which the server learns the aggregate of the sparsified local model updates from a large number of users without learning the individual parameters. Our theoretical analysis shows that the proposed framework significantly reduces the communication overhead of secure aggregation while maintaining comparable computational complexity. We further identify a trade-off between privacy and communication efficiency that arises from sparsification. Our experiments demonstrate that our framework reduces the communication overhead by up to 7.8x and speeds up the wall-clock training time by 1.13x compared to conventional secure aggregation benchmarks.
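To make the core idea concrete, the following is a minimal Python sketch, not the paper's actual protocol, combining two standard ingredients it builds on: pairwise additive masking (masks shared by user pairs cancel in the aggregate, so the server sees only the sum) and sparsification of the update before masking (each user sends k masked coordinates instead of d, which is the source of the communication saving). All names, parameters, and the shared-seed coordinate selection are illustrative assumptions.

```python
import numpy as np

# Hypothetical parameters (not from the paper): model dimension D,
# number of kept coordinates K, finite-field modulus P for masking.
D, K, P = 1000, 100, 2**31 - 1
SCALE = 2**16  # fixed-point scaling for quantizing real-valued gradients

def common_sparse_indices(round_seed: int) -> np.ndarray:
    """All users derive the same K coordinates from a shared per-round seed,
    so their masked sparse updates stay aligned (one possible design;
    the paper's actual selection rule may differ)."""
    rng = np.random.default_rng(round_seed)
    return rng.choice(D, size=K, replace=False)

def pairwise_mask(uid: int, other: int, round_seed: int) -> np.ndarray:
    """Pseudorandom mask shared by users (uid, other); the smaller id adds it
    and the larger subtracts it, so the pair cancels in the aggregate."""
    seed = hash((min(uid, other), max(uid, other), round_seed)) % 2**32
    rng = np.random.default_rng(seed)
    return rng.integers(0, P, size=K)

def masked_sparse_update(grad, uid, user_ids, round_seed):
    """User-side: sparsify, quantize, then add/subtract pairwise masks mod P."""
    idx = common_sparse_indices(round_seed)
    q = np.round(grad[idx] * SCALE).astype(np.int64) % P
    for other in user_ids:
        if other == uid:
            continue
        m = pairwise_mask(uid, other, round_seed)
        q = (q + m) % P if uid < other else (q - m) % P
    return q  # K masked values instead of D: the communication saving

def server_aggregate(masked_updates, round_seed):
    """Server-side: pairwise masks cancel in the sum, so the server recovers
    only the aggregate of the sparsified updates, never an individual one."""
    total = np.zeros(K, dtype=np.int64)
    for u in masked_updates:
        total = (total + u) % P
    total[total > P // 2] -= P  # map field elements back to signed range
    agg = np.zeros(D)
    agg[common_sparse_indices(round_seed)] = total / SCALE
    return agg
```

Even this simplified sketch exhibits the trade-off noted above: shrinking K reduces the per-user upload from D to K coordinates, but it also concentrates the aggregate onto fewer parameters, which is where the tension between communication efficiency and privacy arises in the analysis.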