Owing to their massive success across various domains, deep learning techniques are increasingly used to design network intrusion detection systems (NIDSs) that detect and mitigate both known and unknown attacks with high detection accuracy and minimal feature engineering. However, deep learning models have been found to be vulnerable to data instances, known as adversarial examples, that can mislead a model into making incorrect classification decisions. This vulnerability allows attackers to target NIDSs by adding small, carefully crafted perturbations to malicious traffic to evade detection and disrupt the system's critical functionality. The problem of adversarial deep learning has been studied extensively in the computer vision domain; however, it remains an open research area in network security applications. Therefore, this survey explores the research that applies different aspects of adversarial machine learning to network intrusion detection in order to provide directions for potential solutions. First, the surveyed studies are categorized by their contributions: generating adversarial examples, evaluating the robustness of ML-based NIDSs against adversarial examples, and defending these models against such attacks. Second, we highlight the characteristics identified in the surveyed research. Furthermore, we discuss the applicability of existing generic adversarial attacks to the NIDS domain, the feasibility of launching the proposed attacks in real-world scenarios, and the limitations of existing mitigation solutions.