Federated learning allows multiple participants to collaboratively train an effective model without exposing their private data. However, this distributed training paradigm is vulnerable to attacks from Byzantine clients, which interfere with the training of the global model by modifying the model or uploading false gradients. In this paper, we propose a novel serverless federated learning framework, Committee Mechanism based Federated Learning (CMFL), which ensures the robustness of the algorithm with a convergence guarantee. In CMFL, a committee system is set up to screen the uploaded local gradients: the elected committee members score each local gradient, a selection strategy determines which gradients enter the aggregation procedure, and an election strategy replaces committee members over time. Reflecting the differing priorities of model performance and defense, we design two opposite selection strategies to serve accuracy and robustness respectively. Extensive experiments show that CMFL achieves faster convergence and higher accuracy than typical federated learning while providing stronger robustness than traditional Byzantine-tolerant algorithms, all in a decentralized manner. In addition, we theoretically analyze and prove the convergence of CMFL under the different election and selection strategies, which coincides with the experimental results.
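To make the committee workflow concrete, the sketch below walks through one round of committee-based screening, selection, aggregation, and election. The cosine-similarity scoring rule, the top-k/bottom-k selection rules, and all names (`cmfl_round`, `strategy`, etc.) are illustrative assumptions for this sketch, not the paper's exact definitions.

```python
import numpy as np

def cosine(u, v):
    # Similarity between two flattened gradient vectors.
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def cmfl_round(committee_grads, client_grads, k, strategy="defense"):
    """One illustrative CMFL round.

    committee_grads: list of gradients held by current committee members.
    client_grads: dict mapping client id -> uploaded local gradient.
    Returns the aggregated gradient and the ids forming the next committee.
    """
    # 1. Scoring: each committee member rates every uploaded gradient;
    #    here, by average cosine similarity to the members' own gradients.
    scores = {cid: np.mean([cosine(g, m) for m in committee_grads])
              for cid, g in client_grads.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)

    # 2. Selection: two opposite strategies. "defense" keeps the k most
    #    committee-consistent gradients (filtering Byzantine uploads);
    #    "performance" keeps the k least similar ones (more diverse signal).
    chosen = ranked[:k] if strategy == "defense" else ranked[-k:]

    # 3. Aggregation over the selected gradients only.
    aggregated = np.mean([client_grads[cid] for cid in chosen], axis=0)

    # 4. Election: one simple possibility is letting the highest-scored
    #    clients form the next committee.
    new_committee_ids = ranked[:len(committee_grads)]
    return aggregated, new_committee_ids
```

Under this sketch, the "defense" branch trades some gradient diversity for robustness, while the "performance" branch does the opposite, mirroring the two opposite selection strategies described above.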