We develop a new method to detect anomalies within time series, which is essential in many application domains, ranging from self-driving cars, finance, and marketing to medical diagnosis and epidemiology. The method is based on self-supervised deep learning, which has played a key role in facilitating deep anomaly detection on images, where powerful image transformations are available. However, such transformations are widely unavailable for time series. Addressing this, we develop Local Neural Transformations (LNT), a method that learns local transformations of time series from data. The method produces an anomaly score for each time step and can thus be used to detect anomalies within time series. We prove in a theoretical analysis that our novel training objective is more suitable for transformation learning than the objectives of previous deep anomaly detection (AD) methods. Our experiments demonstrate that LNT can find anomalies in speech segments from the LibriSpeech dataset and detects interruptions of cyber-physical systems better than previous work. Visualizations of the learned transformations give insight into the type of transformations that LNT learns.
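To make the idea of per-time-step anomaly scoring concrete, the following is a minimal toy sketch, not the paper's architecture: random linear maps stand in for the learned local transformations, a trivial linear embedding stands in for the trained encoder, and a crude distance to the local context replaces the actual contrastive training objective. All names (`encode`, `anomaly_scores`, `transforms`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's trained networks):
# W_enc: a trivial linear "encoder" embedding each time step into d dims.
# transforms: K local transformations, here just random linear maps.
d, K = 8, 4
W_enc = rng.normal(size=(1, d))
transforms = [rng.normal(size=(d, d)) for _ in range(K)]

def encode(x):
    """Embed a univariate series of shape (T,) into latents of shape (T, d)."""
    return x[:, None] * W_enc  # placeholder for a real encoder

def anomaly_scores(x):
    """One score per time step: transformed latents at step t should still
    resemble the local context; a large deviation signals an anomaly."""
    z = encode(x)                                           # (T, d)
    scores = np.zeros(len(x))
    for t in range(1, len(x)):
        views = np.stack([z[t] @ Wk for Wk in transforms])  # (K, d)
        # Mean distance between transformed views and the previous latent,
        # a crude proxy for the paper's contrastive scoring.
        scores[t] = np.mean(np.linalg.norm(views - z[t - 1], axis=1))
    return scores

x = np.sin(np.linspace(0, 6 * np.pi, 200))
x[120:125] += 3.0                 # inject a short local anomaly
s = anomaly_scores(x)
print(int(np.argmax(s)))          # the highest score lands in the injected window
```

Even with these crude stand-ins, the score trace spikes inside the injected window, which illustrates why a per-time-step score can localize anomalies within a series rather than only flagging the series as a whole.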