Byzantine-robust federated learning aims to mitigate Byzantine failures during the federated training process, in which malicious participants may upload arbitrary local updates to the central server to degrade the performance of the global model. In recent years, several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients and improve the robustness of federated learning. These solutions were claimed to be Byzantine-robust under certain assumptions. Meanwhile, new attack strategies keep emerging, striving to circumvent these defense schemes. However, there is a lack of systematic comparison and empirical study thereof. In this paper, we conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning, FedSGD and FedAvg. We first survey existing Byzantine attack strategies and Byzantine-robust aggregation schemes that aim to defend against Byzantine attacks. We also propose a new scheme, ClippedClustering, to enhance the robustness of a clustering-based scheme by automatically clipping the updates. We then provide an experimental evaluation of eight aggregation schemes under five different Byzantine attacks. Our results show that these aggregation schemes sustain relatively high accuracy in some cases but are ineffective in others. In particular, our proposed ClippedClustering successfully defends against most attacks when the local datasets are IID. However, when the local datasets are Non-IID, the performance of all the aggregation schemes decreases significantly. With Non-IID data, some of these aggregation schemes fail even in the complete absence of Byzantine clients. We conclude that the robustness of all the aggregation schemes is limited, highlighting the need for new defense strategies, in particular for Non-IID datasets.
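To make the idea of enhancing a clustering-based aggregator with automatic clipping more concrete, the following is a minimal sketch of how such a clip-then-cluster aggregation rule might be implemented. It assumes the clipping threshold is the median update norm and that clients are split into two groups by pairwise cosine distance, with the larger group averaged; the function name and these specific choices are illustrative assumptions, not the authors' exact implementation of ClippedClustering.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def clipped_clustering_aggregate(updates):
    """Aggregate client updates by clipping, clustering, and averaging.

    `updates` is a list of 1-D NumPy arrays, one flattened update per client.
    This is an illustrative sketch: the median-norm clipping threshold and the
    two-cluster cosine split are assumptions, not the paper's exact design.
    """
    updates = [np.asarray(u, dtype=np.float64) for u in updates]

    # 1) Clip: rescale every update so its L2 norm is at most the median norm.
    norms = np.array([np.linalg.norm(u) for u in updates])
    tau = np.median(norms)
    clipped = [u * min(1.0, tau / (np.linalg.norm(u) + 1e-12)) for u in updates]

    # 2) Cluster: split clients into two groups by pairwise cosine distance.
    X = np.stack(clipped)
    normed = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    dist = 1.0 - normed @ normed.T  # cosine distance matrix
    labels = AgglomerativeClustering(
        n_clusters=2, metric="precomputed", linkage="average"
    ).fit_predict(dist)

    # 3) Keep the larger cluster (assumed to be the benign majority) and average it.
    majority = 0 if (labels == 0).sum() >= (labels == 1).sum() else 1
    return X[labels == majority].mean(axis=0)
```

Under these assumptions, clipping bounds the influence of updates with abnormally large norms before clustering, so a Byzantine client cannot dominate the aggregate simply by scaling up its update.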