Recent research has demonstrated that superficially well-trained machine learning (ML) models are highly vulnerable to adversarial examples. As ML techniques become a popular solution for cyber-physical system (CPS) applications in the research literature, the security of these applications is of growing concern. However, current studies on adversarial machine learning (AML) mainly focus on pure cyberspace domains, and the risks that adversarial examples pose to CPS applications have not been well investigated. In particular, due to the distributed nature of data sources and the inherent physical constraints imposed by CPSs, the widely used threat models and state-of-the-art AML algorithms from previous cyberspace research become infeasible. We study the potential vulnerabilities of ML applied in CPSs by proposing Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples that satisfy the intrinsic constraints of the physical systems. We first summarize the differences between AML in CPSs and AML in existing cyberspace systems, and propose a general threat model for ConAML. We then design a best-effort search algorithm that iteratively generates adversarial examples under linear physical constraints. We evaluate our algorithms with simulations of two typical CPSs, the power grid and the water treatment system. The results show that our ConAML algorithms can effectively generate adversarial examples that significantly degrade the performance of the ML models even under practical constraints.
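The constraint-aware generation step described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm; it assumes the linear physical constraints take the form A·η = 0 on the perturbation η, and performs iterative sign-gradient ascent projected onto the null space of A so that every intermediate perturbation remains physically consistent. All function and parameter names (`grad_fn`, `null_space_projector`, `eps`, `alpha`) are illustrative.

```python
import numpy as np

def null_space_projector(A):
    """Orthogonal projector onto the null space of A: P = I - A^T (A A^T)^-1 A.
    Any vector multiplied by P satisfies the linear constraints A v = 0."""
    AAt_inv = np.linalg.inv(A @ A.T)
    return np.eye(A.shape[1]) - A.T @ AAt_inv @ A

def constrained_attack(grad_fn, x, A, eps, alpha=0.05, steps=20):
    """Iterative sign-gradient attack whose perturbation eta always
    satisfies A @ eta = 0 (hypothetical linear physical constraints)."""
    P = null_space_projector(A)
    eta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + eta)                 # gradient of the attacker's loss
        eta = eta + P @ (alpha * np.sign(g)) # ascend only within the constraint set
        eta = np.clip(eta, -eps, eps)        # keep the perturbation budget
        eta = P @ eta                        # re-project after clipping
    return x + eta
```

For example, with A = [1, 1, 1, 1] the perturbation must sum to zero, modeling a conservation law such as a power-balance constraint: the attack can redistribute sensor readings but not change their total.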