Although machine learning is widely used in practice, little is known about practitioners' understanding of potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and its potentially vulnerable components. Similar studies have helped in other security fields to discover root causes or improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. Firstly, practitioners often confuse machine learning security with threats and defences that are not directly related to machine learning. Secondly, in contrast to most academic research, our participants perceive the security of machine learning not solely in relation to individual models, but rather in the context of entire workflows that consist of multiple components. Jointly with our additional findings, these two facets provide a foundation to substantiate mental models for machine learning security and have implications for the integration of adversarial machine learning into corporate workflows, \new{decreasing practitioners' reported uncertainty}, and appropriate regulatory frameworks for machine learning security.