Recent research has shown that Machine Learning/Deep Learning (ML/DL) models are particularly vulnerable to adversarial perturbations: small changes to the input data crafted to fool a classifier. The Digital Twin, typically described as consisting of a physical entity, a virtual counterpart, and the data connections between them, is increasingly investigated as a means of improving the performance of physical entities by leveraging the computational techniques enabled by the virtual counterpart. This paper explores the susceptibility to adversarial attacks of the Digital Twin (DT), a virtual model designed to accurately reflect a physical object, when it relies on ML/DL classifiers operating within Cyber-Physical Systems (CPS). As a proof of concept, we first formulate a DT of a vehicular system using a deep neural network architecture and then use it to launch an adversarial attack. We attack the DT model by perturbing the input to the trained model and show how easily the model can be broken with white-box attacks.
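To make the attack setting concrete, the sketch below illustrates a white-box input-perturbation attack of the kind described above. The abstract does not name the specific attack or model, so the FGSM-style perturbation, the `DTClassifier` network, and the vehicular feature dimensions used here are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical DT classifier: a small fully connected network mapping vehicular
# sensor features (e.g., speed, throttle, engine load) to a discrete state class.
# This stands in for the paper's (unspecified) deep neural network architecture.
class DTClassifier(nn.Module):
    def __init__(self, n_features=8, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, x, y, epsilon=0.05):
    """White-box attack: perturb the input one signed-gradient step of size epsilon
    in the direction that increases the classification loss (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    model = DTClassifier()
    model.eval()
    x = torch.rand(16, 8)             # placeholder batch of sensor readings
    y = torch.randint(0, 4, (16,))    # placeholder ground-truth labels
    x_adv = fgsm_attack(model, x, y)
    clean_pred = model(x).argmax(dim=1)
    adv_pred = model(x_adv).argmax(dim=1)
    print("predictions flipped:", (clean_pred != adv_pred).sum().item(), "of", len(y))
```

Because the attacker has full (white-box) access to the trained model's gradients, even a small epsilon is typically enough to flip many predictions, which is the vulnerability the paper demonstrates on its vehicular DT.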