Several moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years. MTDs claim to increase the difficulty of attacks by regularly changing certain elements of the defense, such as cycling through configurations. To examine these claims, we study for the first time the effectiveness of several recent MTDs against adversarial ML attacks in the malware detection domain. Under different threat models, we show that both existing and novel transferability- and query-based attack strategies can achieve high levels of evasion against these defenses on Android and Windows. We also show that fingerprinting and reconnaissance are possible, and demonstrate how attackers may obtain critical defense hyperparameters as well as information about how predictions are produced. Based on our findings, we present key recommendations for future work on developing effective MTDs against adversarial attacks in ML-based malware detection.