Nowadays, numerous applications incorporate machine learning (ML) algorithms owing to their remarkable achievements. However, many studies in computer vision have shown that ML models can be fooled by intentionally crafted inputs, called adversarial examples, which exploit the intrinsic vulnerabilities of these models. These findings have raised serious concerns in the cybersecurity field, and a growing number of researchers are studying the feasibility of such attacks against security systems built on ML algorithms, such as Intrusion Detection Systems (IDS). The feasibility of these adversarial attacks is influenced by various domain-specific constraints, which can make crafting adversarial examples considerably harder. Despite the substantial amount of research in this area, much of it focuses on showing that a model can be fooled using features extracted from the raw data, without addressing the practical side, i.e., the reverse transformation required to move from theory to practice. For this reason, we present a review of key papers in this area and provide a comprehensive analysis. Our analysis highlights several challenges that the reviewed papers leave unaddressed.