Neural networks are increasingly used for intrusion detection in industrial control systems (ICS). Because neural networks are vulnerable to adversarial examples, attackers who wish to damage an ICS can attempt to hide their attacks from detection using adversarial example techniques. In this work we address the domain-specific challenges of constructing such attacks against autoregressive intrusion detection systems (IDS) in an ICS setting. We model an attacker who can compromise a subset of sensors in an ICS protected by an LSTM-based IDS. The attacker manipulates the data sent to the IDS, seeking to hide the presence of real cyber-physical attacks occurring in the ICS. We evaluate our adversarial attack methodology on the Secure Water Treatment system, both on purely continuous data and on data containing a mixture of discrete and continuous variables. In the continuous data domain our attack successfully hides the cyber-physical attacks while requiring, on average, 2.87 of the 12 monitored sensors to be compromised. With both discrete and continuous data, our attack requires, on average, 3.74 of the 26 monitored sensors to be compromised.
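To illustrate the setting, here is a minimal sketch (not the paper's actual method) of the core idea: an autoregressive detector predicts the next sensor readings from a window of past readings and raises an alarm when the prediction residual exceeds a threshold, while an attacker who controls a subset of sensors nudges the reported values toward the detector's predictions until the residual falls below that threshold. A toy linear model stands in for the LSTM; all variable names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sensors, window = 4, 5
# Toy linear autoregressive weights standing in for a trained LSTM predictor.
W = rng.normal(scale=0.1, size=(n_sensors, window * n_sensors))

def predict(history):
    """One-step prediction of all sensor readings from a flattened window."""
    return W @ history.reshape(-1)

def residual(history, reading):
    """Detection statistic: norm of the prediction error for one timestep."""
    return np.linalg.norm(predict(history) - reading)

history = rng.normal(size=(window, n_sensors))
threshold = 1.0
compromised = [0, 1]  # hypothetical subset of sensors the attacker controls

# A cyber-physical attack shifts the compromised sensors' true readings,
# producing a large residual that would trigger the detector.
true_reading = predict(history).copy()
true_reading[compromised] += 3.0

# Gradient of 0.5 * ||predict(history) - reading||^2 w.r.t. the reported
# reading is (reading - prediction); descend only on compromised sensors.
reported = true_reading.copy()
pred = predict(history)
for _ in range(200):
    grad = reported - pred
    reported[compromised] -= 0.1 * grad[compromised]

print(residual(history, true_reading) > threshold)  # unmodified reading alarms
print(residual(history, reported) < threshold)      # spoofed reading evades
```

In the paper's setting the attacker additionally faces domain constraints (discrete actuator states, physical plausibility of continuous values), which this unconstrained sketch omits.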