Recent advances in artificial intelligence and the increasing need for powerful defensive measures in the domain of network security have led to the adoption of deep learning approaches in network intrusion detection systems (NIDS). These methods have achieved superior performance against conventional network attacks, enabling the deployment of practical security systems in unique and dynamic sectors. Unfortunately, adversarial machine learning has recently shown that deep learning models are inherently vulnerable to adversarial modifications of their input data. Because of this susceptibility, the deep learning models deployed to power a network defense could in fact be the weakest entry point for compromising a network system. In this paper, we show that by modifying on average as few as 1.38 input features, an adversary can generate malicious inputs that effectively fool a deep learning based NIDS. Therefore, when designing such systems, it is crucial to evaluate their performance not only from the conventional network security perspective but also from the perspective of adversarial machine learning.
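The abstract does not name the attack, but the "few modified features" behavior is characteristic of saliency-guided attacks such as JSMA. The following is a minimal, hypothetical PyTorch sketch of that idea: greedily perturb the single most influential input feature until a stand-in NIDS classifier flips its decision. The model architecture, feature count, step size, and change budget are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

NUM_FEATURES = 41  # assumption: an NSL-KDD-style flow feature vector

# Stand-in for a trained deep learning NIDS (architecture is illustrative).
model = nn.Sequential(nn.Linear(NUM_FEATURES, 64), nn.ReLU(), nn.Linear(64, 2))

def few_feature_attack(x, target=0, max_changes=5, step=1.0):
    """Greedily perturb the most influential feature per step (JSMA-style)
    until the classifier outputs `target` or the change budget is spent.
    Returns the adversarial example and the number of features modified."""
    x_adv = x.clone()
    untouched = torch.ones_like(x_adv)  # 1 = feature not yet modified
    for n_changed in range(max_changes + 1):
        x_in = x_adv.detach().requires_grad_(True)
        logits = model(x_in)
        if logits.argmax().item() == target:
            return x_adv.detach(), n_changed  # classifier fooled
        if n_changed == max_changes:
            break
        # Gradient of the target-class logit w.r.t. the input features.
        grad = torch.autograd.grad(logits[target], x_in)[0]
        i = int((grad.abs() * untouched).argmax())  # most salient feature
        with torch.no_grad():
            x_adv[i] += step * torch.sign(grad[i])  # nudge toward target
        untouched[i] = 0.0
    return x_adv.detach(), max_changes

# Example: try to flip a random "malicious" flow to the benign class (0).
flow = torch.rand(NUM_FEATURES)
adv, changed = few_feature_attack(flow)
print(f"modified {changed} feature(s)")
```

In this sketch, modifying each feature at most once keeps the perturbation count directly comparable to the paper's "features modified" metric; a production attack would also clamp perturbed values to their valid ranges so the traffic remains protocol-consistent.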