While Machine Learning (ML) technologies are widely adopted in many mission-critical fields to support intelligent decision-making, concerns remain about system resilience against ML-specific security attacks and privacy breaches, as well as about the trust that users place in these systems. In this article, we present a systematic and comprehensive survey of the state of the art in ML robustness and trustworthiness from a security engineering perspective, focusing on the problems of system threat analysis, design, and evaluation encountered when developing practical machine learning applications, in terms of robustness and user trust. We organize the presentation of the survey to convey this body of knowledge from that angle. We then describe a metamodel we created that represents the body of knowledge in a standardized and visual way. We further illustrate how the metamodel can guide a systematic threat analysis and security design process that extends and scales up the classic process. Finally, we propose future research directions motivated by our findings. Our work differs from existing surveys in that it (i) explores the fundamental principles and best practices for developing robust and trustworthy ML systems, and (ii) studies the interplay of robustness and user trust in the context of ML systems. We expect this survey to provide a big picture for machine learning security practitioners.