BACKGROUND: Machine learning-based security detection models have become prevalent in modern malware and intrusion detection systems. However, previous studies show that such models are susceptible to adversarial evasion attacks. In this type of attack, inputs (i.e., adversarial examples) are specially crafted by intelligent malicious adversaries, with the aim of being misclassified by existing state-of-the-art models (e.g., deep neural networks). Once the attackers can fool a classifier into thinking that a malicious input is actually benign, they can render a machine learning-based malware or intrusion detection system ineffective. GOAL: To help security practitioners and researchers build a model that is more robust against adversarial evasion attacks through the use of ensemble learning. METHOD: We propose an approach called OMNI, the main idea of which is to explore methods that create an ensemble of "unexpected models", i.e., models whose control hyperparameters have a large distance to the hyperparameters of an adversary's target model, with which we then make an optimized weighted ensemble prediction. RESULTS: In studies with five adversarial evasion attacks (FGSM, BIM, JSMA, DeepFool and Carlini-Wagner) on five security datasets (NSL-KDD, CIC-IDS-2017, CSE-CIC-IDS2018, CICAndMal2017 and the Contagio PDF dataset), we show that the improvement rate of OMNI's prediction accuracy over attack accuracy is about 53% (median value) across all datasets, with about an 18% (median value) loss rate when comparing pre-attack accuracy and OMNI's prediction accuracy. CONCLUSION: When using ensemble learning as a defense method against adversarial evasion attacks, we suggest creating an ensemble of unexpected models that are distant from the attacker's expected model (i.e., the target model) through methods such as hyperparameter optimization.
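To make the "unexpected models" idea concrete, the following is a minimal sketch (not the paper's implementation): candidate hyperparameter configurations are sampled, the ones with the largest normalized Euclidean distance from the adversary's assumed target configuration are kept for the ensemble, and a weighted vote produces the final prediction. The hyperparameter space, distance metric, and weighting scheme here are all illustrative assumptions.

```python
import random

# Hypothetical hyperparameter space; the names and ranges are illustrative
# assumptions, not the exact search space used by OMNI.
SPACE = {"n_layers": (1, 8), "units": (16, 512), "dropout": (0.0, 0.5)}

def normalized_distance(a, b, space):
    """Euclidean distance between two configs, each dimension scaled to [0, 1]."""
    total = 0.0
    for key, (lo, hi) in space.items():
        total += ((a[key] - b[key]) / (hi - lo)) ** 2
    return total ** 0.5

def sample_config(space, rng):
    """Draw one random hyperparameter configuration from the space."""
    return {key: rng.uniform(lo, hi) for key, (lo, hi) in space.items()}

def unexpected_models(target_cfg, space, n_candidates=200, n_pick=5, seed=0):
    """Keep the candidate configs farthest from the adversary's target config."""
    rng = random.Random(seed)
    candidates = [sample_config(space, rng) for _ in range(n_candidates)]
    candidates.sort(
        key=lambda c: normalized_distance(c, target_cfg, space), reverse=True
    )
    return candidates[:n_pick]

def weighted_vote(predictions, weights):
    """Weighted ensemble prediction for a binary label (0 = benign, 1 = malicious)."""
    score = sum(w * p for p, w in zip(predictions, weights)) / sum(weights)
    return 1 if score >= 0.5 else 0
```

In practice each selected configuration would be trained into a full model and the per-model weights would be tuned (e.g., by validation accuracy); the sketch above only captures the distance-based selection and the weighted-voting step.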