Recent research indicates that federated learning (FL) systems are vulnerable to a variety of security attacks. While numerous defense strategies have been proposed, they are mainly designed to counter specific attack patterns and lack adaptability, rendering them less effective against uncertain or adaptive threats. To address this gap, this work models adversarial FL as a Bayesian Stackelberg Markov game (BSMG) between the defender and the attacker. We further devise an effective meta-learning technique to solve for the Stackelberg equilibrium, yielding a resilient and adaptable defense. Experimental results show that our meta-Stackelberg learning approach excels at combating strong model poisoning and backdoor attacks of uncertain types.
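The leader-follower structure described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's algorithm: the attacker types, the quadratic damage model, and all constants (`ATTACK_TYPES`, the `0.1` regularization weight, learning rate) are illustrative assumptions. The defender (leader) meta-learns a single robust defense parameter against a prior over attacker types, each of which best-responds (follower), matching the Bayesian Stackelberg setup in spirit only.

```python
import random

random.seed(0)

# Hypothetical attacker types: each scales its poisoning strength differently.
ATTACK_TYPES = [0.5, 1.0, 2.0]  # assumed support of the defender's prior

def attacker_best_response(theta, magnitude):
    """Follower: choose poisoning effort e maximizing a toy payoff
    e * magnitude - theta * e**2, whose argmax is magnitude / (2 * theta)."""
    return magnitude / (2.0 * theta)

def defender_loss(theta, magnitude):
    """Leader: residual damage at the attacker's best response,
    plus a cost for an overly aggressive filter (0.1 is an assumed weight)."""
    e = attacker_best_response(theta, magnitude)
    damage = e * magnitude - theta * e ** 2
    return damage + 0.1 * theta

def meta_stackelberg(theta=1.0, lr=0.05, steps=200):
    """Outer loop: stochastic gradient steps on the expected leader loss,
    sampling an attack type each round (meta-learning over the prior)."""
    for _ in range(steps):
        m = random.choice(ATTACK_TYPES)
        eps = 1e-4  # finite-difference gradient of the bilevel loss
        g = (defender_loss(theta + eps, m)
             - defender_loss(theta - eps, m)) / (2 * eps)
        theta = max(0.1, theta - lr * g)  # keep the defense parameter feasible
    return theta

theta_star = meta_stackelberg()
```

The key point the sketch captures is that the defender's gradient is taken through the attacker's best response, so the learned parameter hedges against the whole distribution of attack types rather than any single one.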