Owing to its low communication costs and privacy-preserving capabilities, Federated Learning (FL) has become a promising tool for training effective machine learning models across distributed clients. However, under this distributed architecture, unreliable clients may upload low-quality models to the aggregation server, degrading or even collapsing training. In this paper, we model these unreliable client behaviors and propose a defensive mechanism to mitigate this security risk. Specifically, we first investigate the impact of unreliable clients on the trained models by deriving a convergence upper bound on the loss function based on gradient descent updates. Our theoretical bound reveals that, given a fixed amount of total computational resources, there exists an optimal number of local training iterations in terms of convergence performance. We further design a novel defensive mechanism, named deep neural network based secure aggregation (DeepSA). Our experimental results validate the theoretical analysis, and the effectiveness of DeepSA is verified by comparison with other state-of-the-art defensive mechanisms.
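To convey why an optimal number of local iterations arises under a fixed budget, consider the following illustrative trade-off; the functional form and constants are assumptions chosen for intuition, not the bound derived in the paper. Let $\tau$ be the number of local iterations per round, $R$ the total resource budget, and $T(\tau)$ the number of communication rounds that budget affords when each round costs a fixed communication overhead $c_3$ plus computation $c_4\tau$:

% Illustrative only: the symbols and functional form below are assumptions
% for intuition, not the paper's actual convergence bound.
\[
  G(\tau) \;=\;
  \underbrace{\frac{c_1}{T(\tau)\,\tau}}_{\text{optimization error}}
  \;+\;
  \underbrace{c_2\,(\tau - 1)}_{\text{local-drift penalty}},
  \qquad
  T(\tau) \;=\; \frac{R}{c_3 + c_4\,\tau}.
\]

The first term shrinks as $\tau$ grows (a larger share of the budget goes to useful computation rather than communication), while the second grows with $\tau$ (local models drift apart between aggregations). Setting $G'(\tau) = 0$ yields an interior optimum $\tau^{*} = \sqrt{c_1 c_3 / (c_2 R)}$ whenever this value is at least one, consistent with the existence claim above.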
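To make the threat model and defense concrete, the following minimal sketch (Python with NumPy) shows a single aggregation round in which a server filters client updates before averaging them. The filter here is a simple norm check standing in for a learned detector; it is an illustrative assumption, not the actual DeepSA architecture, and the names federated_round and norm_filter are hypothetical.

# Minimal sketch of the setting described above. The norm-based filter is a
# placeholder assumption, not the paper's DeepSA network.
import numpy as np

def federated_round(global_weights, client_updates, filter_fn):
    """One aggregation round: keep only the updates the filter accepts,
    then average them into the global model (FedAvg-style)."""
    accepted = [u for u in client_updates if filter_fn(u)]
    if not accepted:  # fall back to the current model if everything is rejected
        return global_weights
    return global_weights + np.mean(accepted, axis=0)

def norm_filter(update, threshold=10.0):
    """Placeholder for a learned detector such as DeepSA: here we simply
    reject updates whose norm is implausibly large."""
    return np.linalg.norm(update) < threshold

# Toy usage: two honest clients and one unreliable client with an outsized update.
w = np.zeros(4)
updates = [np.ones(4) * 0.1, np.ones(4) * 0.12, np.ones(4) * 50.0]
w = federated_round(w, updates, norm_filter)
print(w)  # the outlier update is filtered out before averaging

The design point this sketch isolates is that the defense operates entirely on the server side of the aggregation step, so clients run standard local training unchanged.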