Over the last decade, machine learning techniques have been applied with considerable success in anomaly-based intrusion detection systems. However, recent studies have shown that machine learning in general, and deep learning in particular, is vulnerable to adversarial attacks, in which an attacker fools a model by supplying deceptive input. Research in computer vision, where this vulnerability was first discovered, has shown that adversarial images designed to fool a specific model can also deceive other machine learning models. In this paper, we investigate the transferability of adversarial network traffic against multiple machine learning-based intrusion detection systems. Furthermore, we analyze the robustness of ensemble intrusion detection systems, which are known to achieve better accuracy than a single model, against transferable adversarial attacks. Finally, we examine Detect & Reject as a defensive mechanism to limit the effect of the transferability property of adversarial network traffic against machine learning-based intrusion detection systems.
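The transferability effect described above can be illustrated with a minimal, hypothetical experiment: an FGSM-style perturbation is crafted against one "surrogate" detector (here a logistic regression, whose input gradient has a closed form) and then evaluated against an independently trained "target" detector (here a decision tree). The models, dataset, and epsilon value are illustrative assumptions, not the paper's actual experimental setup.

```python
# Hypothetical transferability sketch (not the paper's setup):
# craft FGSM adversarial inputs against a surrogate model and
# measure how they affect a separately trained target model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for network-traffic features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
target = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# FGSM: x' = x + eps * sign(dLoss/dx). For logistic regression,
# dLoss/dx = (sigmoid(w.x + b) - y) * w, so the gradient is analytic.
eps = 0.5  # illustrative perturbation budget
logits = X_te @ surrogate.coef_.ravel() + surrogate.intercept_[0]
grad = (1.0 / (1.0 + np.exp(-logits)) - y_te)[:, None] * surrogate.coef_
X_adv = X_te + eps * np.sign(grad)

clean_acc = target.score(X_te, y_te)
adv_acc = target.score(X_adv, y_te)
print(f"target accuracy on clean inputs:       {clean_acc:.3f}")
print(f"target accuracy on transferred inputs: {adv_acc:.3f}")
```

If the perturbations transfer, the target model's accuracy on `X_adv` drops below its accuracy on `X_te` even though the attack never queried the target, which is the property the paper studies for intrusion detection systems.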