Over the years, most research on defenses against adversarial attacks on machine learning models has been conducted in the image recognition domain. The malware detection domain has received less attention despite its importance. Moreover, most work exploring these defenses has focused on several methods but with no strategy for applying them. In this paper, we introduce StratDef, a strategic defense system based on a moving target defense approach. We overcome challenges related to the systematic construction, selection, and strategic use of models to maximize adversarial robustness. StratDef dynamically and strategically chooses the best models to increase the uncertainty for the attacker while minimizing critical aspects of the adversarial ML domain, such as attack transferability. We provide the first comprehensive evaluation of defenses against adversarial attacks on machine learning for malware detection, where our threat model covers different levels of threat, attacker knowledge, capabilities, and attack intensities. We show that StratDef performs better than other defenses even when facing the peak adversarial threat. We also show that, of the existing defenses, only a few adversarially trained models provide substantially better protection than vanilla models, but these are still outperformed by StratDef.