Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model, or on defenses that withstand most attacks. Unfortunately, little consideration is given to the actual \textit{cost} of the attack or of the defense. Moreover, adversarial samples are often crafted in the ``feature-space'', making the corresponding evaluations of questionable value. Simply put, the current situation does not allow one to estimate the actual threat posed by adversarial attacks, leading to a lack of secure ML systems. We aim to clarify such confusion in this paper. By considering the application of ML for Phishing Website Detection (PWD), we formalize the ``evasion-space'' in which an adversarial perturbation can be introduced to fool an ML-PWD -- demonstrating that even perturbations in the ``feature-space'' are useful. Then, we propose a realistic threat model describing evasion attacks against ML-PWD that are cheap to stage, and hence intrinsically more attractive for real phishers. Finally, we perform the first statistically validated assessment of state-of-the-art ML-PWD against 12 evasion attacks. Our evaluation shows (i) the true efficacy of evasion attempts that are more likely to occur, and (ii) the impact of perturbations crafted in different evasion-spaces. Our realistic evasion attempts induce a statistically significant degradation (3--10\% at $p\!<\!0.05$), and their cheap cost makes them a subtle threat. Notably, however, some ML-PWD are immune to our most realistic attacks ($p$=0.22). Our contribution paves the way for a much-needed re-assessment of adversarial attacks against ML systems for cybersecurity.