Federated learning is a distributed framework for training machine learning models over data residing on mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with $N$ users achieves a secure aggregation overhead of $O(N\log{N})$, as opposed to $O(N^2)$, while tolerating a user dropout rate of up to $50\%$. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a $40\times$ speedup over the state-of-the-art protocols with up to $N=200$ users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
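To make the additive secret sharing primitive mentioned above concrete, the following is a minimal sketch of how a server can recover only the sum of user models while learning nothing about any individual model. This is an illustration of the underlying primitive, not the Turbo-Aggregate protocol itself: the field size `FIELD`, the integer quantization of models, the all-to-all share exchange, and the helper names `share_model` and `secure_aggregate` are all simplifying assumptions for exposition (dropout handling and the multi-group circular structure are omitted).

```python
# Minimal sketch of additive secret sharing for secure aggregation.
# NOT the Turbo-Aggregate protocol; field size, quantization, and the
# all-to-all exchange pattern are simplifying assumptions.
import numpy as np

FIELD = 2**31 - 1  # prime modulus; models assumed quantized to integers


def share_model(model, num_users, rng):
    """Split one user's quantized model into additive shares that sum to
    the model mod FIELD; fewer than num_users shares reveal nothing."""
    shares = [rng.integers(0, FIELD, size=model.shape)
              for _ in range(num_users - 1)]
    last = (model - sum(shares)) % FIELD
    return shares + [last]


def secure_aggregate(models):
    """Each user shares its model with every other user; each user sums
    the shares it holds, and the server sums those partial sums to
    recover only the aggregate of all models."""
    rng = np.random.default_rng(0)
    n = len(models)
    all_shares = [share_model(m, n, rng) for m in models]
    # User j sums the j-th share of every user's model.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % FIELD
                    for j in range(n)]
    return sum(partial_sums) % FIELD


# Example: three users, models quantized to integer vectors.
models = [np.array([3, 5, 7]), np.array([1, 1, 1]), np.array([10, 0, 2])]
print(secure_aggregate(models))  # [14  6 10], the true sum of the models
```

Note that this baseline requires every user to exchange shares with every other user, which is the $O(N^2)$ communication pattern the abstract refers to; Turbo-Aggregate's multi-group circular strategy is what reduces this overhead to $O(N\log{N})$.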