Real-world deep learning models developed for time series forecasting are used in several critical applications, ranging from medical devices to the security domain. Many previous works have shown that deep learning models are prone to adversarial attacks and have studied their vulnerabilities. However, the vulnerabilities of time series forecasting models to adversarial inputs have not been extensively explored. While an attack on a forecasting model might aim simply to degrade the model's performance, it is more effective if the attack targets a specific impact on the model's output. In this paper, we propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models. These targeted attacks create a specific impact on the amplitude and direction of the output prediction. We adapt existing adversarial attack techniques from the computer vision domain to time series. Additionally, we propose a modified version of the Auto Projected Gradient Descent attack for targeted attacks. We examine the impact of the proposed targeted attacks versus untargeted attacks, and use Kolmogorov-Smirnov (KS) tests to statistically demonstrate the impact of the attacks. Our experimental results show that targeted attacks on time series models are viable and preserve greater statistical similarity to clean inputs, making them harder to detect through statistical methods. We believe that this work opens a new paradigm in the time series forecasting domain and represents an important consideration for developing better defenses.
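To make the idea of a targeted attack concrete, the following is a minimal sketch of a targeted PGD-style perturbation that pushes a forecaster's output toward an attacker-chosen value under an L-infinity budget. This is an illustrative assumption, not the paper's actual attack: the forecaster here is a toy moving-average model with an analytic gradient, whereas in practice the gradient would come from automatic differentiation through the real deep learning model, and the proposed method is a modified Auto-PGD rather than plain PGD.

```python
import numpy as np

def forecast(x, w):
    """Toy forecaster: weighted sum over the input window.

    Stands in for a deep forecasting model; chosen so the gradient
    with respect to the input is simply w.
    """
    return float(np.dot(w, x))

def targeted_pgd(x, w, y_target, eps=0.2, alpha=0.02, steps=50):
    """Push the forecast toward y_target within an L-inf ball of radius eps.

    Minimizes (forecast(x_adv) - y_target)^2 with signed gradient steps,
    projecting back into the eps-ball after each step.
    """
    x_adv = x.copy()
    for _ in range(steps):
        residual = forecast(x_adv, w) - y_target
        grad = 2.0 * residual * w               # analytic d(loss)/dx
        x_adv = x_adv - alpha * np.sign(grad)   # signed descent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
    return x_adv

rng = np.random.default_rng(0)
w = np.full(8, 1.0 / 8)          # moving-average weights over a window of 8
x = rng.normal(size=8)           # clean input window

y_clean = forecast(x, w)
# Directional target: force the prediction upward by 1.0
x_adv = targeted_pgd(x, w, y_target=y_clean + 1.0)
y_adv = forecast(x_adv, w)
# y_adv moves toward the target while the perturbation stays within eps,
# so the adversarial window remains statistically close to the clean one.
```

Because the perturbation is bounded coordinate-wise by `eps`, the adversarial window stays distributionally close to the clean input, which is the property the KS tests in the paper probe.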