Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.