Deep anomaly detection (AD) aims to provide robust and efficient classifiers for one-class and unbalanced settings. However, current AD models still struggle on edge-case normal samples and often fail to maintain high performance across different scales of anomalies. Moreover, no unified framework currently covers both one-class and unbalanced learning efficiently. In light of these limitations, we introduce a new two-stage anomaly detector that memorizes multi-scale normal prototypes during training to compute an anomaly deviation score. First, we simultaneously learn representations and memory modules at multiple scales using a novel memory-augmented contrastive learning. Then, we train an anomaly distance detector on the spatial deviation maps between prototypes and observations. Our model substantially improves on state-of-the-art performance across a wide range of object, style, and local anomalies, with up to a 50% relative error improvement on CIFAR-100. It is also the first model to maintain high performance across both the one-class and unbalanced settings.
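The core idea of scoring by deviation from memorized prototypes can be illustrated with a minimal sketch. This is not the paper's implementation: the feature extractor, memory update rule, and the learned distance detector are omitted, and all names, shapes, and the mean aggregation across scales are illustrative assumptions.

```python
import numpy as np

def deviation_score(features, prototypes):
    """Toy multi-scale deviation score (illustrative, not the paper's method).

    features:   dict mapping scale name -> (d,) feature vector of one observation
    prototypes: dict mapping scale name -> (k, d) bank of memorized normal prototypes
    Returns the mean, over scales, of the distance to the nearest prototype.
    """
    deviations = []
    for scale, f in features.items():
        protos = prototypes[scale]
        dists = np.linalg.norm(protos - f, axis=1)  # distance to each prototype
        deviations.append(dists.min())              # nearest-prototype deviation
    return float(np.mean(deviations))               # aggregate across scales

# Illustrative usage: a sample close to a memorized prototype scores lower
# than a sample far from every prototype at both scales.
rng = np.random.default_rng(0)
protos = {s: rng.normal(size=(8, 4)) for s in ("coarse", "fine")}
near_obs = {s: protos[s][0] + 0.01 * rng.normal(size=4) for s in protos}
far_obs = {s: rng.normal(size=4) + 5.0 for s in protos}
```

In the paper's method the deviation is a full spatial map fed to a trained distance detector rather than a scalar average; the sketch only conveys the "distance to memorized normal prototypes, pooled over scales" intuition.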