The accumulation of time-series data and the absence of labels make time-series Anomaly Detection (AD) a self-supervised deep learning task. Methods based on a single normality assumption reveal only one aspect of the overall normality and therefore perform poorly on tasks involving a large number of anomalies. In particular, Contrastive Learning (CL) methods push negative pairs apart, many of which consist of two normal samples, which degrades AD performance. Existing methods based on multiple normality assumptions are usually two-staged: they first pre-train on auxiliary tasks whose objectives may differ from AD, limiting their performance. To overcome these shortcomings, a deep Contrastive One-Class Anomaly detection method for time series (COCA) is proposed, which follows the normality assumptions of both CL and one-class classification. It treats the original and reconstructed representations as the positive pair in negative-sample-free CL, termed "sequence contrast". A contrastive one-class loss function is then composed of invariance terms and variance terms: the invariance terms simultaneously optimize the losses of both assumptions, while the variance terms prevent "hypersphere collapse". Extensive experiments on two real-world time-series datasets show that the proposed method achieves state-of-the-art performance.
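To make the loss structure concrete, the following is a minimal NumPy sketch of a contrastive one-class loss of this general shape: invariance terms pull the original and reconstructed representations toward a one-class center, and a variance term keeps the per-dimension spread of the batch above a threshold so the representations cannot all collapse to a single point ("hypersphere collapse"). The function name, the cosine-similarity formulation, and the weights are illustrative assumptions, not COCA's exact formulation.

```python
import numpy as np

def contrastive_one_class_loss(q, q_rec, center, eps=1e-4, gamma=1.0, lam=1.0):
    """Hedged sketch of a contrastive one-class loss (not the paper's exact form).

    q, q_rec : (batch, dim) original and reconstructed representations
    center   : (dim,) one-class center
    """
    def cos(a, b):
        # cosine similarity along the feature dimension (broadcasts over batch)
        return np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

    # invariance terms: align both views with the one-class center,
    # which also aligns them with each other (sequence contrast)
    invariance = np.mean(1 - cos(q, center)) + np.mean(1 - cos(q_rec, center))

    # variance term: hinge on the per-dimension std of the batch,
    # penalizing dimensions whose spread falls below gamma (anti-collapse)
    std = np.sqrt(q.var(axis=0) + eps)
    variance = np.mean(np.maximum(0.0, gamma - std))

    return invariance + lam * variance
```

A fully collapsed batch (every representation equal to the center) zeroes the invariance terms but pays roughly `gamma` in the variance term, which is what makes the trivial constant solution unattractive during training.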