In this study, we focus on the impact of adversarial attacks on deep learning-based anomaly detection in CPS networks and implement a mitigation approach that retrains the models with adversarial samples. We use the Bot-IoT and Modbus IoT datasets to represent the two CPS networks; both datasets are captured from IoT and Industrial IoT (IIoT) networks and provide samples of normal and attack activities. We train deep learning models on these datasets and use them to generate adversarial samples. The deep learning models trained on these datasets achieve high accuracy in detecting attacks. We adopt an Artificial Neural Network (ANN) with one input layer, four intermediate layers, and one output layer; the output layer has two nodes representing the binary classification result. To generate adversarial samples for the experiment, we use the `fast_gradient_method' function from the CleverHans library. The experimental results demonstrate the influence of Fast Gradient Sign Method (FGSM) adversarial samples on prediction accuracy and show the effectiveness of the retrained model in defending against adversarial attacks.
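To make the pipeline concrete, the sketch below trains a small ANN of the described shape (one input layer, four intermediate layers, a two-node output layer), generates FGSM adversarial samples with CleverHans' `fast_gradient_method', and then retrains on the augmented data as the mitigation step. It assumes a TensorFlow/Keras implementation; the hidden-layer widths, feature count, epsilon, training settings, and placeholder data are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumed TensorFlow/Keras setup, not the authors' exact code).
import numpy as np
import tensorflow as tf
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

N_FEATURES = 20  # assumed number of input features after preprocessing

# ANN: one input layer, four intermediate (hidden) layers, two-node output layer.
# The output layer produces logits for the two classes (normal vs. attack),
# which is what CleverHans' default loss expects.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),  # two output nodes for binary classification
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# Placeholder arrays standing in for the preprocessed Bot-IoT / Modbus features.
x_train = np.random.rand(1000, N_FEATURES).astype("float32")
y_train = np.random.randint(0, 2, size=(1000,))

# 1) Train the baseline anomaly detector.
model.fit(x_train, y_train, epochs=5, batch_size=64, verbose=0)

# 2) Generate FGSM adversarial samples (L-infinity perturbation, assumed eps=0.1).
x_adv = fast_gradient_method(model, tf.constant(x_train), eps=0.1, norm=np.inf)

# 3) Mitigation: retrain on the original data augmented with adversarial samples.
x_aug = np.concatenate([x_train, x_adv.numpy()])
y_aug = np.concatenate([y_train, y_train])
model.fit(x_aug, y_aug, epochs=5, batch_size=64, verbose=0)
```

In this sketch the baseline model's drop in accuracy on `x_adv' illustrates the attack's influence, and evaluating the retrained model on freshly generated adversarial samples illustrates the defense; epsilon and the layer widths would be tuned per dataset in practice.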