Deep Neural Networks (DNNs) have proven highly accurate at a wide variety of tasks in recent years. Their benefits have also been embraced in power grids, where they are used to detect False Data Injection Attacks (FDIA) during critical tasks such as state estimation. However, the vulnerabilities of DNNs, combined with the distinct infrastructure of cyber-physical systems (CPS), can enable attackers to bypass the detection mechanism. Moreover, the divergent nature of CPS limits the applicability of conventional defenses against False Data Injection Attacks. In this paper, we propose a DNN framework with an additional layer that uses randomization to mitigate the adversarial effect by padding the inputs. The primary advantage of our method is that, when deployed to a DNN model, it has a trivial impact on the model's performance even with larger padding sizes. We demonstrate the favorable outcome of the framework through simulations on the IEEE 14-bus, 30-bus, 118-bus, and 300-bus systems. Furthermore, to justify the framework, we select attack techniques that generate subtle adversarial examples capable of bypassing the detection mechanism effortlessly.
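The randomized-padding defense can be pictured as a thin pre-processing layer placed in front of the FDIA detector. The sketch below is only a minimal illustration of that idea under stated assumptions, not the paper's implementation: the class name RandomPadLayer, the 64-element padded size, and the 54-measurement input dimension are all hypothetical choices made for the example.

```python
import torch
import torch.nn as nn

class RandomPadLayer(nn.Module):
    """Hypothetical padding layer: expands a measurement vector to a fixed
    target length and places the original values at a random offset on each
    forward pass, so the positions seen by the detector are randomized."""

    def __init__(self, target_len: int, pad_value: float = 0.0):
        super().__init__()
        self.target_len = target_len
        self.pad_value = pad_value

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_measurements), with n_measurements <= target_len
        batch, n = x.shape
        padded = x.new_full((batch, self.target_len), self.pad_value)
        offset = torch.randint(0, self.target_len - n + 1, (1,)).item()
        padded[:, offset:offset + n] = x
        return padded

# Illustrative use: prepend the layer to a small FDIA detector.
# The 54-measurement input and 64-element padded size are assumptions,
# not values taken from the paper.
detector = nn.Sequential(
    RandomPadLayer(target_len=64),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
scores = detector(torch.randn(8, 54))  # per-sample attack probability
```

Because the padding only relocates the original measurements and fills the remaining slots with a constant, the detector's clean-input accuracy is largely preserved, while the per-pass randomness makes it harder for an attacker to craft a single perturbation that survives every padded placement.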