Federated learning (FL) is a privacy-friendly type of machine learning in which devices locally train a model on their private data and typically exchange model updates with a server. In decentralized FL (DFL), peers exchange model updates directly with each other instead. However, DFL is challenging since (1) the training data possessed by different peers is often non-i.i.d. (i.e., distributed differently between the peers) and (2) malicious, or Byzantine, attackers can share arbitrary model updates with other peers to subvert the training process. We address these two challenges and present Bristle, middleware between the learning application and the decentralized network layer. Bristle leverages transfer learning to predetermine and freeze the non-output layers of a neural network, significantly speeding up model training and lowering communication costs. To securely update the output layer with model updates from other peers, we design a fast distance-based prioritizer and a novel performance-based integrator. Their combined effect results in high resilience to Byzantine attackers and the ability to handle non-i.i.d. classes. We empirically show that Bristle converges to a consistent 95% accuracy in Byzantine environments, outperforming all evaluated baselines. In non-Byzantine environments, Bristle requires 83% fewer iterations to achieve 90% accuracy compared to state-of-the-art methods. We show that when the training classes are non-i.i.d., Bristle significantly outperforms the most Byzantine-resilient baselines, achieving 2.3x their accuracy while reducing communication costs by 90%.