Deep learning-based time series models are extensively used in the engineering and manufacturing industries for process control and optimization, asset monitoring, and diagnostic and predictive maintenance. These models have substantially improved the prediction of the remaining useful life (RUL) of industrial equipment, but they are inherently vulnerable to adversarial attacks. Such attacks can be exploited easily and can lead to catastrophic failure of critical industrial equipment. In general, a different adversarial perturbation is computed for each instance of the input data. This is difficult for an attacker to achieve in real time, however, because of the high computational requirements and the lack of uninterrupted access to the input data. Hence, we present the concept of the universal adversarial perturbation, a special imperceptible noise that fools regression-based RUL prediction models. Attackers can readily use universal adversarial perturbations for real-time attacks, since neither continuous access to the input data nor repeated computation of adversarial perturbations is required. We evaluate the effect of universal adversarial attacks using the NASA turbofan engine dataset. We show that adding the universal adversarial perturbation to any instance of the input data increases the error in the model's predicted output. To the best of our knowledge, we are the first to study the effect of universal adversarial perturbations on time series regression models. We further examine the effect of varying the perturbation strength on RUL prediction models and find that model accuracy decreases as the strength of the universal adversarial perturbation increases. We also show that universal adversarial perturbations can be transferred across different models.
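To make the idea concrete, the sketch below shows one common way a single universal perturbation can be crafted for a regression model: accumulate signed input gradients of the prediction error over many training windows and clip the result to an L-infinity budget. This is a minimal illustration under assumed shapes and interfaces (a PyTorch RUL model taking windows of shape (batch, sequence length, sensors), a data loader, and the hypothetical parameters eps, step, and epochs), not the exact algorithm proposed in the paper.

```python
# Minimal sketch (assumed approach, not necessarily the paper's method):
# craft one perturbation "delta" that is reused for every input window,
# so no per-instance optimisation is needed at attack time.
import torch
import torch.nn as nn

def craft_universal_perturbation(model, loader, eps=0.1, step=0.01, epochs=5):
    """Return a single perturbation tensor shared across all input windows."""
    model.eval()
    loss_fn = nn.MSELoss()
    delta = None
    for _ in range(epochs):
        for x, y in loader:                      # x: (batch, seq_len, n_sensors)
            if delta is None:
                delta = torch.zeros_like(x[0])   # one perturbation for all inputs
            d = delta.clone().requires_grad_(True)
            pred = model(x + d)                  # delta broadcasts over the batch
            loss = loss_fn(pred.squeeze(-1), y)  # error we want to *increase*
            loss.backward()
            with torch.no_grad():                # gradient-ascent step, then clip
                delta = (delta + step * d.grad.sign()).clamp(-eps, eps)
    return delta

# At attack time the same delta is simply added to any new input:
#   x_adv = x + delta
# which is why continuous data access and repeated computation are not needed.
```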