This paper explores how the current paradigm of vulnerability management might adapt to include machine learning (ML) systems through a thought experiment: what if flaws in ML were assigned Common Vulnerabilities and Exposures (CVE) identifiers (CVE-IDs)? We consider both ML algorithms and model objects. The hypothetical scenario is structured around exploring changes to six areas of vulnerability management: discovery, report intake, analysis, coordination, disclosure, and response. While algorithm flaws are well known in the academic research community, there is no clear line of communication between that research community and the operational communities that deploy and manage systems using ML. The thought experiment identifies ways in which CVE-IDs might establish useful lines of communication between these two communities. In particular, assigning CVE-IDs would begin to introduce the research community to operational security concepts, a gap left by existing efforts.