Anomaly detection is typically posed as an unsupervised learning task in the literature due to the prohibitive cost and difficulty of obtaining large-scale labeled anomaly data, but this ignores the fact that a very small number (e.g., a few dozen) of labeled anomalies can often be obtained at small or trivial cost in many real-world anomaly detection applications. To leverage such labeled anomaly data, we study an important anomaly detection problem termed weakly-supervised anomaly detection, in which, in addition to a large amount of unlabeled data, a limited number of labeled anomalies are available during modeling. Learning with the small labeled anomaly data enables anomaly-informed modeling, which helps identify anomalies of interest and address the notoriously high false positives of unsupervised anomaly detection. However, the problem is especially challenging, since (i) the limited amount of labeled anomaly data often, if not always, cannot cover all types of anomalies and (ii) the unlabeled data is often dominated by normal instances but contains anomaly contamination. We address the problem by formulating it as a pairwise relation prediction task. Particularly, our approach defines a two-stream ordinal regression neural network to learn the relation of randomly sampled instance pairs, i.e., whether the instance pair contains two labeled anomalies, one labeled anomaly, or just unlabeled data instances. The resulting model effectively leverages both the labeled and unlabeled data to substantially augment the training data and learn well-generalized representations of normality and abnormality. Comprehensive empirical results on 40 real-world datasets show that our approach (i) significantly outperforms four state-of-the-art methods in detecting both known and previously unseen anomalies and (ii) is substantially more data-efficient.
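To make the pairwise relation prediction idea concrete, below is a minimal sketch of how pairs could be sampled from the few labeled anomalies and the unlabeled pool and regressed onto ordinal targets by a two-stream network. This is an illustrative assumption of one possible instantiation, not the authors' exact architecture or hyperparameters: the ordinal targets 8/4/0, the L1 loss, the network sizes, and the synthetic data are all placeholders.

```python
# Sketch of pairwise ordinal regression for weakly-supervised anomaly detection.
# Assumptions (illustrative): ordinal targets 8 / 4 / 0 for anomaly-anomaly,
# anomaly-unlabeled, and unlabeled-unlabeled pairs, a shared encoder feeding a
# two-stream scoring head, and an L1 regression loss.
import torch
import torch.nn as nn

class PairwiseOrdinalNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # Shared encoder applied to each instance of the pair (two streams).
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Scoring head on the concatenated pair representation.
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        return self.head(torch.cat([z1, z2], dim=1)).squeeze(1)

def sample_pairs(anoms, unlabeled, n_pairs):
    """Randomly build instance pairs with (assumed) ordinal targets 8/4/0."""
    pairs, targets = [], []
    for _ in range(n_pairs):
        kind = torch.randint(0, 3, (1,)).item()
        if kind == 0:    # two labeled anomalies
            i, j = torch.randint(0, len(anoms), (2,))
            pairs.append((anoms[i], anoms[j])); targets.append(8.0)
        elif kind == 1:  # one labeled anomaly, one unlabeled instance
            i = torch.randint(0, len(anoms), (1,)).item()
            j = torch.randint(0, len(unlabeled), (1,)).item()
            pairs.append((anoms[i], unlabeled[j])); targets.append(4.0)
        else:            # two unlabeled instances (mostly normal)
            i, j = torch.randint(0, len(unlabeled), (2,))
            pairs.append((unlabeled[i], unlabeled[j])); targets.append(0.0)
    x1 = torch.stack([p[0] for p in pairs])
    x2 = torch.stack([p[1] for p in pairs])
    return x1, x2, torch.tensor(targets)

# Toy training loop: a large unlabeled pool and a few dozen labeled anomalies.
torch.manual_seed(0)
unlabeled = torch.randn(1000, 16)       # unlabeled data, dominated by normals
anoms = torch.randn(30, 16) + 3.0       # a few dozen labeled anomalies
model = PairwiseOrdinalNet(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    x1, x2, y = sample_pairs(anoms, unlabeled, n_pairs=128)
    loss = (model(x1, x2) - y).abs().mean()   # L1 ordinal regression loss
    opt.zero_grad(); loss.backward(); opt.step()
```

At test time, one plausible scoring scheme (again an assumption) is to pair a query instance with labeled anomalies and with unlabeled samples and average the predicted pairwise scores; random pair sampling is also what augments the small labeled set into a much larger training set.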