Federated Learning (FL) has become increasingly popular for data-driven analysis in cyber-physical critical infrastructures. Since the FL process may involve clients' confidential information, Differential Privacy (DP) has recently been proposed to secure it against adversarial inference. However, we find that while DP greatly alleviates privacy concerns, the additional DP-noise opens a new avenue for model poisoning in FL. Nonetheless, very little effort has been made in the literature to investigate this adversarial exploitation of the DP-noise. To bridge this gap, in this paper we present a novel adaptive model poisoning technique, $\alpha$-MPELM, through which an attacker can exploit the additional DP-noise to evade state-of-the-art anomaly detection techniques and prevent optimal convergence of the FL model. We evaluate our proposed attack against state-of-the-art anomaly detection approaches in terms of detection accuracy and validation loss. The main significance of our proposed $\alpha$-MPELM attack is that it reduces state-of-the-art anomaly detection accuracy by 6.8% for norm detection, 12.6% for accuracy detection, and 13.8% for mix detection. Furthermore, we propose a Reinforcement Learning-based DP level selection process to defend against the $\alpha$-MPELM attack. The experimental results confirm that our defense mechanism converges to an optimal privacy policy without human intervention.
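To make the core idea concrete, below is a minimal sketch, not the paper's $\alpha$-MPELM implementation, of how a poisoning perturbation could hide inside the DP-noise envelope of a clipped, Gaussian-noised client update so that a norm-based detector still sees a statistically plausible magnitude. All names and parameters (`dp_client_update`, `poisoned_update`, `clip_norm`, `noise_multiplier`, `alpha`) are illustrative assumptions.

```python
# Sketch: hiding a poisoning shift within the expected DP-noise magnitude.
# Assumes the standard Gaussian mechanism used in DP-FedAvg-style training.
import numpy as np

rng = np.random.default_rng(0)

def dp_client_update(gradient, clip_norm=1.0, noise_multiplier=1.1):
    """Clip the client update to clip_norm and add Gaussian DP noise."""
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=gradient.shape)
    return clipped + noise

def poisoned_update(gradient, target_direction,
                    clip_norm=1.0, noise_multiplier=1.1, alpha=0.5):
    """Shift the benign DP update toward a poisoning direction, scaling the
    shift to a fraction alpha of the expected DP-noise norm (~ sigma * sqrt(d))
    so a norm-based anomaly detector calibrated to DP noise is likely to pass it."""
    benign = dp_client_update(gradient, clip_norm, noise_multiplier)
    sigma = noise_multiplier * clip_norm
    budget = alpha * sigma * np.sqrt(gradient.size)
    shift = target_direction / np.linalg.norm(target_direction) * budget
    return benign + shift

# Example: for a 1,000-dimensional update, the poisoned norm stays close to the
# benign DP update's norm, illustrating why norm detection degrades under DP.
g = rng.normal(size=1000)
d = rng.normal(size=1000)
print(np.linalg.norm(dp_client_update(g)), np.linalg.norm(poisoned_update(g, d)))
```

The sketch only illustrates the evasion intuition for norm detection; the paper's adaptive attack additionally targets accuracy- and mix-based detectors.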