Gradient-based training in federated learning is known to be vulnerable to faulty or malicious worker nodes, which are often modeled as Byzantine clients. Previous work either uses auxiliary data at the parameter server to verify the received gradients or leverages statistics-based methods to identify and remove malicious gradients from Byzantine clients. In this paper, we acknowledge that auxiliary data may not always be available in practice and therefore focus on the statistics-based approach. However, recent work on model poisoning attacks has shown that well-crafted attacks can circumvent most existing median- and distance-based statistical defense methods, making malicious gradients indistinguishable from honest ones. To tackle this challenge, we show that the element-wise sign of the gradient vector provides valuable insight for detecting model poisoning attacks. Based on our theoretical analysis of state-of-the-art attacks, we propose a novel approach, \textit{SignGuard}, that enables Byzantine-robust federated learning through collaborative malicious gradient filtering. More precisely, the received gradients are first processed to generate relevant magnitude, sign, and similarity statistics, which are then collaboratively utilized by multiple parallel filters to eliminate malicious gradients before final aggregation. We further provide a theoretical analysis of SignGuard, quantifying its convergence under an appropriate choice of learning rate and non-IID training data. Finally, extensive experiments on image and text classification tasks - including MNIST, Fashion-MNIST, CIFAR-10, and AG-News - are conducted together with recently proposed attacks and defense strategies. The numerical results demonstrate the effectiveness and superiority of our proposed approach.
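The filtering pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' reference implementation: the norm bounds, the two-cluster sign-statistics grouping, and all function names are assumptions made for illustration; the actual SignGuard filters and thresholds are defined in the paper itself.

```python
# Hypothetical SignGuard-style aggregation sketch (illustrative only):
# screen client gradients with a norm-based filter and a sign-statistics
# clustering filter in parallel, then average the survivors.
import numpy as np

def sign_stats(g):
    # Fractions of positive, zero, and negative elements of a gradient.
    n = g.size
    return np.array([(g > 0).sum() / n, (g == 0).sum() / n, (g < 0).sum() / n])

def signguard_like_aggregate(grads, norm_bounds=(0.1, 3.0)):
    grads = [np.asarray(g, dtype=float) for g in grads]
    norms = np.array([np.linalg.norm(g) for g in grads])
    med = np.median(norms)
    # Filter 1 (magnitude): drop gradients whose norm deviates too far
    # from the median norm; bounds here are arbitrary placeholders.
    pass1 = {i for i, nm in enumerate(norms)
             if norm_bounds[0] * med <= nm <= norm_bounds[1] * med}
    # Filter 2 (sign statistics): simple 2-means clustering on sign
    # statistics; keep the larger cluster, assuming honest majority.
    feats = np.stack([sign_stats(g) for g in grads])
    centers = feats[[0, -1]].copy()
    for _ in range(10):
        labels = np.argmin(
            ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(0)
    majority = np.argmax(np.bincount(labels, minlength=2))
    pass2 = {i for i in range(len(grads)) if labels[i] == majority}
    # Aggregate: mean of gradients that survived both parallel filters.
    keep = sorted(pass1 & pass2)
    return np.mean([grads[i] for i in keep], axis=0), keep
```

A sign-flipping attacker with a scaled-up gradient would fail both filters here: its norm falls outside the median-relative bounds, and its sign statistics place it in the minority cluster.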