Federated learning enables distributed training across a set of clients without requiring any participant to reveal its private training data to a central entity or to the other participants. Owing to its decentralized execution, however, federated learning is vulnerable to adversarial (Byzantine) clients, which can arbitrarily manipulate their local updates. It is therefore important to develop robust federated learning algorithms that defend against Byzantine clients without sacrificing model convergence or performance. In the study of robustness, a simulator can simplify and accelerate the implementation and evaluation of attack and defense strategies; however, open-source simulators for this purpose are lacking. Herein, we present Blades, a scalable, extensible, and easily configurable simulator that helps researchers and developers efficiently implement and validate novel strategies against baseline algorithms in robust federated learning. Blades is built upon Ray, a versatile distributed framework, making it effortless to parallelize single-machine code from a single CPU to multi-core, multi-GPU, or multi-node settings with minimal configuration. Blades contains built-in implementations of representative attack and defense strategies and provides user-friendly interfaces for incorporating new ideas. We maintain the source code and documentation at https://github.com/bladesteam/blades.
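The parallelization pattern described above can be sketched in a few lines of Ray. The sketch below is illustrative only and does not use Blades' actual API; the `Client` actor and `local_update` method are hypothetical names. Clients run as Ray actors and compute local updates in parallel, a Byzantine minority sends corrupted updates (a simple sign-flipping attack), and the server aggregates with a coordinate-wise median, a representative robust defense.

```python
# Minimal sketch (not Blades' API): parallel clients as Ray actors,
# with a Byzantine-robust aggregation rule on the server side.
import numpy as np
import ray

ray.init(num_cpus=4)  # scales to more cores or a cluster without code changes


@ray.remote
class Client:
    """One simulated client; a Byzantine client sends a corrupted update."""

    def __init__(self, byzantine: bool = False):
        self.byzantine = byzantine

    def local_update(self, global_model: np.ndarray) -> np.ndarray:
        if self.byzantine:
            # Sign-flipping attack: push the model in the wrong direction.
            return -10.0 * global_model
        # Honest client: a dummy local step for illustration.
        return 0.9 * global_model


clients = [Client.remote(byzantine=(i < 2)) for i in range(10)]  # 2 attackers
model = np.ones(5)

for _ in range(3):  # a few federated rounds
    updates = ray.get([c.local_update.remote(model) for c in clients])
    # Coordinate-wise median tolerates a Byzantine minority,
    # unlike the plain mean, which the attackers could drag arbitrarily far.
    model = np.median(np.stack(updates), axis=0)

print(model)
ray.shutdown()
```

Because each client is an independent Ray actor, the same script scales from a single CPU to a multi-node cluster by changing only the `ray.init` resources, which is the property Blades inherits from Ray.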