Federated learning (FL) has recently emerged as a popular machine learning technique owing to its efficacy in safeguarding clients' confidential information. Nevertheless, despite inherent and additional privacy-preserving mechanisms (e.g., differential privacy, secure multi-party computation), FL models remain vulnerable to a variety of privacy-violating and security-compromising attacks (e.g., data or model poisoning) because of their numerous attack vectors, which render the models either ineffective or sub-optimal. Existing adversarial models that focus on untargeted model poisoning attacks are not simultaneously stealthy and persistent, since these two properties conflict (large-scale attacks are easier to detect, and vice versa); achieving both therefore remains an open research problem in this adversarial learning paradigm. Motivated by this, we analyze the adversarial learning process in an FL setting and show that a stealthy and persistent model poisoning attack can be conducted by exploiting the differential privacy noise. More specifically, we develop an unprecedented DP-exploited stealthy model poisoning (DeSMP) attack on FL models. Our empirical analysis on both classification and regression tasks using two popular datasets demonstrates the effectiveness of the proposed DeSMP attack. Moreover, we develop a novel reinforcement learning (RL)-based defense strategy against such model poisoning attacks, which intelligently and dynamically selects the privacy level of the FL models to minimize the DeSMP attack surface and facilitate attack detection.
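To make the underlying intuition concrete, the following is a minimal, self-contained sketch (not the paper's actual DeSMP implementation) of how differential privacy noise can mask a small but persistent poisoning update in federated averaging. All names and parameters (`NOISE_STD`, `poison`, the number of clients and rounds, etc.) are illustrative assumptions: a single malicious client adds a drift of roughly one standard deviation of the DP noise, so each round's poisoned update is statistically hard to distinguish from a legitimate noisy one, while the drift accumulates across rounds.

```python
# Illustrative sketch only: DP noise masking a persistent poisoning update
# in federated averaging. Parameters and names are hypothetical assumptions,
# not taken from the DeSMP paper.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10          # model parameter dimension
NOISE_STD = 0.1   # std of the Gaussian DP noise added at aggregation
N_CLIENTS = 20
ROUNDS = 100

def honest_update(global_w):
    # Stand-in for a local training step: a small zero-mean update.
    return rng.normal(0.0, 0.05, DIM)

def malicious_update(global_w, target_direction, noise_std):
    # Stealth: keep the poison magnitude at the DP noise scale (~1 sigma),
    # so it hides inside the noise the defender already expects.
    poison = target_direction * noise_std
    return honest_update(global_w) + poison

target = np.ones(DIM) / np.sqrt(DIM)  # attacker's desired drift direction
w = np.zeros(DIM)

for _ in range(ROUNDS):  # persistence: repeated over many FL rounds
    updates = [honest_update(w) for _ in range(N_CLIENTS - 1)]
    updates.append(malicious_update(w, target, NOISE_STD))
    # Server averages the client updates and adds Gaussian DP noise.
    avg = np.mean(updates, axis=0) + rng.normal(0.0, NOISE_STD, DIM)
    w = w + avg

# Projection of the final model onto the attacker's direction: the
# per-round poison contribution (NOISE_STD / N_CLIENTS) accumulates
# linearly, while honest updates and DP noise only add zero-mean jitter.
drift = float(w @ target)
print(f"drift along target direction after {ROUNDS} rounds: {drift:.2f}")
```

The per-round poison contribution to the average is only `NOISE_STD / N_CLIENTS`, i.e., well below the DP noise floor of a single round, which is precisely the stealth-versus-persistence tension the abstract describes: an anomaly detector looking at individual rounds sees nothing unusual, yet the bias compounds over training.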