The increasing adoption of machine learning algorithms in cybersecurity is creating novel defensive opportunities but also new types of risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks, which introduce tiny perturbations crafted to decrease the effectiveness of threat detection. We observe that the existing literature assumes threat models that are unsuitable for realistic cybersecurity scenarios, because they consider opponents with complete knowledge of the cyber detector or with the ability to interact freely with the target systems. By focusing on Network Intrusion Detection Systems based on machine learning, we identify and model the real capabilities and circumstances required by attackers to carry out feasible and successful adversarial attacks. We then apply our model to several adversarial attacks proposed in the literature, and highlight the limits and merits that determine whether they can translate into actual adversarial attacks. The contributions of this paper can help harden defensive systems by letting cyber defenders address the most critical and realistic issues, and can benefit researchers by allowing them to devise novel forms of adversarial attacks based on realistic threat models.