In this article, we propose the Artificial Intelligence Security Taxonomy to systematize knowledge of the threats, vulnerabilities, and security controls of ML-based systems. We first classify the damage caused by attacks against ML-based systems, define ML-specific security, and discuss its characteristics. Next, we enumerate the relevant assets and stakeholders and provide a general taxonomy of ML-specific threats. Then, we collect a wide range of security controls against ML-specific threats through an extensive review of recent literature. Finally, we classify the vulnerabilities and controls of an ML-based system with respect to each vulnerable asset across the system's entire lifecycle.