Edge devices in federated learning typically have far more limited computation and communication resources than servers in a data center. Recently, advanced model-compression methods such as the Lottery Ticket Hypothesis have been applied to federated learning to reduce model size and communication cost. However, backdoor attacks can compromise such deployments in the federated setting. A malicious edge device trains its client model on poisoned private data and uploads the parameters to the central server, embedding a backdoor into the globally shared model through unwitting aggregation. During inference, the backdoored model classifies any sample carrying a specific trigger as the attacker's target category, while showing only a slight decrease in accuracy on clean samples. In this work, we empirically demonstrate that Lottery Ticket models are as vulnerable to backdoor attacks as the original dense models, and that backdoor attacks can alter the structure of the extracted tickets. Based on the pairwise similarities between tickets, we provide a feasible defense for federated learning against backdoor attacks, evaluated on various datasets.
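The attack pipeline described above — a malicious client stamping a fixed trigger onto its private training data before the server aggregates client updates — can be sketched as follows. This is a minimal illustrative sketch: the trigger shape, target class, and FedAvg-style aggregation are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def poison_batch(images, labels, target_class=0, trigger_value=1.0, patch=3):
    """Stamp a small square trigger onto each image and relabel to the target
    class. (Hypothetical trigger: a bottom-right patch of constant pixels.)"""
    poisoned = images.copy()
    poisoned[:, -patch:, -patch:] = trigger_value  # bottom-right trigger patch
    return poisoned, np.full(len(labels), target_class)

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg aggregation)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Example: the malicious client poisons its local batch, then all clients'
# updates (toy parameter vectors here) are averaged into the global model.
imgs = np.zeros((4, 8, 8))
labels = np.array([1, 2, 3, 4])
p_imgs, p_labels = poison_batch(imgs, labels)

client_updates = [np.ones(5), np.ones(5) * 2.0, np.ones(5) * 3.0]
global_weights = fedavg(client_updates, client_sizes=[10, 10, 10])
```

Because aggregation is a plain weighted average, the poisoned client's update is blended into the global model unchecked, which is how the backdoor survives into the shared model.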