Time series anomaly detection is extensively studied in statistics, economics, and computer science. Over the years, numerous deep learning-based methods have been proposed for time series anomaly detection. Many of these methods achieve state-of-the-art performance on benchmark datasets, giving the false impression that these systems are robust and deployable in many practical and industrial real-world scenarios. In this paper, we demonstrate that the performance of state-of-the-art anomaly detection methods degrades substantially when only small adversarial perturbations are added to the sensor data. We evaluate several scoring metrics, including prediction errors, anomaly scores, and classification scores, over multiple public and private datasets ranging from aerospace applications and server machines to cyber-physical systems in power plants. Under well-known adversarial attacks such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), we demonstrate that state-of-the-art deep neural network (DNN) and graph neural network (GNN) methods, which claim to be robust against anomalies and may already be integrated into real-life systems, see their performance drop to as low as 0%. To the best of our knowledge, we demonstrate for the first time the vulnerabilities of anomaly detection systems against adversarial attacks. The overarching goal of this research is to raise awareness of the adversarial vulnerabilities of time series anomaly detectors.
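To make the attack setting concrete, the following is a minimal sketch of FGSM and PGD perturbations applied to a sensor window. It assumes a hypothetical linear one-step predictor with squared prediction error (not the paper's actual DNN/GNN models), so the input gradient can be written in closed form; the weights, window, and budget `eps` below are illustrative values only.

```python
import numpy as np

def fgsm_perturb(x, w, y_true, eps):
    """One-shot FGSM on a linear one-step predictor.

    Model: y_hat = w @ x, loss = (y_hat - y_true)^2.
    The input gradient is 2 * (y_hat - y_true) * w, and FGSM
    adds eps * sign(gradient) to the sensor window.
    """
    y_hat = w @ x
    grad = 2.0 * (y_hat - y_true) * w
    return x + eps * np.sign(grad)

def pgd_perturb(x, w, y_true, eps, alpha, steps):
    """Iterative PGD: repeated small FGSM steps, projected back
    onto the L-infinity ball of radius eps around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        y_hat = w @ x_adv
        grad = 2.0 * (y_hat - y_true) * w
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

# Hypothetical example: a 4-step sensor window and predictor weights.
x = np.array([0.10, 0.20, 0.15, 0.18])  # recent sensor readings
w = np.array([0.1, 0.2, 0.3, 0.4])      # linear predictor weights
y_true = 0.2                            # next true reading
eps = 0.05                              # perturbation budget

x_fgsm = fgsm_perturb(x, w, y_true, eps)
x_pgd = pgd_perturb(x, w, y_true, eps, alpha=0.02, steps=5)

# Both perturbations stay within eps per sensor reading,
# yet inflate the prediction error the detector relies on.
err_clean = (w @ x - y_true) ** 2
err_fgsm = (w @ x_fgsm - y_true) ** 2
err_pgd = (w @ x_pgd - y_true) ** 2
print(err_clean, err_fgsm, err_pgd)
```

The point of the sketch is that each perturbed reading differs from the clean one by at most `eps`, which can be small enough to look like ordinary sensor noise while still pushing the prediction error, and hence the anomaly score, in the attacker's chosen direction.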