Deep Neural Networks (DNNs) currently achieve state-of-the-art results in many machine learning areas, including intrusion detection. Nevertheless, recent studies in computer vision have shown that DNNs are vulnerable to adversarial attacks, which deceive them into misclassification by feeding them specially crafted inputs. In security-critical domains, such attacks can cause serious damage; in this paper, we therefore examine the effect of adversarial attacks on deep learning-based intrusion detection. In addition, we investigate the effectiveness of adversarial training as a defense against such attacks. Experimental results show that, given sufficient distortion, adversarial examples can mislead the detector, and that adversarial training can improve the robustness of intrusion detection.
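To make the two techniques named above concrete, the following is a minimal sketch of crafting adversarial examples with the Fast Gradient Sign Method (FGSM) and using them for adversarial training. This is an illustration only: the MLP detector, the 20-dimensional feature vectors, the epsilon value, and the random data are all placeholder assumptions, not the architecture, dataset, or attack actually evaluated in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the intrusion detector: a small MLP over
# flow-feature vectors (benign vs. attack). The paper's real model differs.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_example(x, y, epsilon):
    """Craft an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Adversarial training: augment every batch with its adversarial counterpart
# so the detector learns to classify both clean and perturbed inputs.
for _ in range(100):                    # toy loop on random placeholder data
    x = torch.randn(32, 20)             # placeholder feature batch
    y = torch.randint(0, 2, (32,))      # placeholder labels (0=benign, 1=attack)
    x_adv = fgsm_example(x, y, epsilon=0.1)
    optimizer.zero_grad()               # clears grads left over from the attack
    loss = loss_fn(model(torch.cat([x, x_adv])), torch.cat([y, y]))
    loss.backward()
    optimizer.step()
```

Here epsilon controls the distortion budget: larger values make the attack more effective but also more detectable, which matches the abstract's observation that "sufficient distortion" is needed to mislead the detector.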