This paper proposes a general spectral analysis framework that thwarts a security risk in federated learning posed by groups of malicious Byzantine attackers, or colluders, who conspire to upload vicious model updates and severely degrade global model performance. The proposed framework delineates, through a spectral analysis lens, the strong consistency and temporal coherence among Byzantine colluders' model updates, and formulates the detection of Byzantine misbehaviours as a community detection problem on weighted graphs. A modified normalized graph cut is then utilized to discern attackers from benign participants. Moreover, spectral heuristics are adopted to make the detection robust against various attacks. The proposed Byzantine-colluder-resilient method, FedCut, is guaranteed to converge with bounded errors. Extensive experimental results under a variety of settings justify the superiority of FedCut, which demonstrates extremely robust model performance (MP) under various attacks: FedCut's average MP is 2.1% to 16.5% better than that of state-of-the-art Byzantine-resilient methods, and its worst-case MP is 17.6% to 69.5% better than that of these methods.
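To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's FedCut algorithm, which further modifies the normalized cut, exploits temporal coherence across rounds, and applies spectral heuristics) of a plain normalized-cut spectral bipartition over a weighted similarity graph of client updates; the Gaussian-kernel bandwidth `sigma` and the helper name `spectral_bipartition` are assumptions introduced here for illustration only.

```python
import numpy as np

def spectral_bipartition(updates, sigma=1.0):
    """Illustrative sketch: split clients into two communities via an
    (unmodified) normalized graph cut on a similarity graph of their
    flattened model updates. `sigma` is a hypothetical kernel bandwidth,
    not a parameter from the paper."""
    n = updates.shape[0]
    # Pairwise squared distances between client updates.
    diffs = updates[:, None, :] - updates[None, :, :]
    dist2 = np.sum(diffs ** 2, axis=-1)
    # Gaussian-kernel adjacency matrix: the weighted graph over clients.
    W = np.exp(-dist2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Fiedler vector: eigenvector of the second-smallest eigenvalue,
    # whose sign pattern approximates the minimum normalized cut.
    _, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]
    return fiedler >= 0.0  # boolean mask: one side of the cut

# Usage: colluders' near-identical updates form a tight cluster that the
# cut separates from the benign clients (synthetic data for illustration).
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(8, 10))
colluders = np.tile(rng.normal(3.0, 0.01, size=(1, 10)), (4, 1))
mask = spectral_bipartition(np.vstack([benign, colluders]))
print(mask)
```

In this sketch the side of the cut containing mutually consistent (near-identical) updates would be flagged as the suspect colluding community, reflecting the abstract's observation that colluders' updates exhibit strong consistency.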