Secure model aggregation across many users is a key component of federated learning systems. The state-of-the-art protocols for secure model aggregation, which are based on additive masking, require all users to quantize their model updates to the same level of quantization. This severely degrades their performance, as it prevents adaptation to the bandwidth available at different users. We propose three schemes that allow secure model aggregation while using heterogeneous quantization. This enables the users to adjust their quantization in proportion to their available bandwidth, which can provide a substantially better trade-off between the accuracy of training and the communication time. The proposed schemes are based on a grouping strategy that partitions the network into groups and partitions the local model update of each user into segments. Instead of applying the aggregation protocol to the entire local model update vector, it is applied to segments with specific coordination between users. We theoretically evaluate the quantization error of our schemes and demonstrate how they can be utilized to overcome Byzantine users.
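To make the grouping idea concrete, below is a minimal sketch of additive-masking secure aggregation with heterogeneous quantization: users sharing a quantization level form a group, their updates are quantized, and pairwise additive masks cancel when the server sums the masked updates within each group. The ring size `MOD`, the clipping range, the group assignments, and the quantization levels are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

MOD = 2 ** 32          # ring size for masking (illustrative assumption)
LO, HI = -1.0, 1.0     # assumed clipping range for model updates

def quantize(x, levels):
    """Uniformly quantize x in [LO, HI] to integers {0, ..., levels - 1}."""
    scale = (HI - LO) / (levels - 1)
    return np.round((np.clip(x, LO, HI) - LO) / scale).astype(np.int64), scale

rng = np.random.default_rng(0)
d = 8                                    # model dimension (toy size)
updates = [rng.uniform(LO, HI, d) for _ in range(4)]
levels = [16, 16, 256, 256]              # heterogeneous: two bandwidth groups

# Group users by their quantization level.
groups = {}
for x, L in zip(updates, levels):
    groups.setdefault(L, []).append(x)

aggregate = np.zeros(d)
for L, members in groups.items():
    n = len(members)
    # Pairwise masks: user i adds m[(i, j)] for j > i and subtracts m[(j, i)]
    # for j < i, so all masks cancel in the within-group sum.
    m = {(i, j): rng.integers(0, MOD, d)
         for i in range(n) for j in range(i + 1, n)}
    masked_sum = np.zeros(d, dtype=np.int64)
    for i, x in enumerate(members):
        q, scale = quantize(x, L)
        masked = q
        for j in range(n):
            if j > i:
                masked = masked + m[(i, j)]
            elif j < i:
                masked = masked - m[(j, i)]
        # The server only ever sees the masked updates.
        masked_sum = (masked_sum + masked) % MOD
    group_sum = masked_sum % MOD          # masks cancel: exact sum of the q's
    aggregate += group_sum * scale + n * LO   # dequantize the group sum

print("max aggregation error:", np.abs(aggregate - sum(updates)).max())
```

The residual error printed at the end is purely the quantization error, which is larger for the coarsely quantized (low-bandwidth) group, reflecting the accuracy-versus-communication trade-off the abstract describes.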