Owing to their numerous advantages, machine learning (ML) algorithms are now incorporated into many applications. However, numerous studies in the field of image classification have shown that ML models can be fooled by a variety of adversarial attacks, which exploit the inherent vulnerability of ML algorithms. This raises many questions in the cybersecurity field, where a growing number of researchers are investigating the feasibility of such attacks against ML-based security systems, such as intrusion detection systems (IDSs). Most of this research demonstrates that a model can be fooled using features extracted from a raw data source, but it does not consider the practical execution of such attacks, i.e., the reverse transformation from theory to practice. In practice, these adversarial attacks would be subject to various constraints that make their execution more difficult. The purpose of this study is therefore to investigate the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems (NIDS). We demonstrate that it is entirely possible to fool these ML-based IDSs using our proposed adversarial algorithm in a black-box setting, while accounting for as many real-world constraints as possible. In addition, since it is critical to design defense mechanisms that protect ML-based IDSs against such attacks, a defensive scheme is also presented. This work is evaluated on realistic botnet traffic traces. Our goal is to craft adversarial botnet traffic that evades detection while preserving all of its intended malicious functionality.