Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze attack occurrence and concern, and evaluate statistical hypotheses on factors influencing threat perception and exposure. Our results shed light on real-world attacks on deployed machine learning. On the organizational level, while we find no predictors for threat exposure in our sample, the number of implemented defenses depends on exposure to threats or the expected likelihood of becoming a target. We also provide a detailed analysis of practitioners' replies on the relevance of individual machine learning attacks, unveiling complex concerns such as unreliable decision making, business information leakage, and the introduction of bias into models. Finally, we find that on the individual level, prior knowledge about machine learning security influences threat perception. Our work paves the way for more research on adversarial machine learning in practice, but also yields insights for regulation and auditing.