Deep anomaly detection aims to separate anomalous samples from normal ones using high-quality representations. Pretrained features provide effective representations and promising anomaly detection performance. However, adapting pretrained features with only one-class training data is a thorny problem: existing optimization objectives with a global target often lead to pattern collapse, i.e., all inputs are mapped to the same point. In this paper, we propose a novel adaptation framework combining a simple linear transformation with self-attention. The adaptation is applied to a specific input together with its k nearest normal-sample representations in the pretrained feature space, mining the inner relationships among similar one-class semantic features. Furthermore, based on this framework, we propose an effective constraint term to avoid learning a trivial solution. Our simple adaptive projection with pretrained features (SAP2) yields a novel anomaly detection criterion that is more accurate and robust to pattern collapse. Our method achieves state-of-the-art performance on semantic and sensory anomaly detection benchmarks, including 96.5% AUROC on the CIFAR-100 dataset, 97.0% AUROC on the CIFAR-10 dataset, and 88.1% AUROC on the MVTec dataset.
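The pipeline described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the single-head attention form, and the distance-based score are assumptions made for illustration, standing in for the k-nearest-neighbor retrieval, linear transformation, and self-attention adaptation the abstract describes.

```python
import numpy as np

def adapt_and_score(x_feat, memory_bank, W, k=5):
    """Hypothetical sketch of a SAP2-style criterion.

    x_feat: (d,) pretrained feature of the test input.
    memory_bank: (n, d) pretrained features of normal training samples.
    W: (d, d) learned linear transformation (here supplied externally).
    """
    # 1. Retrieve the k nearest normal representations in pretrained space.
    dists = np.linalg.norm(memory_bank - x_feat, axis=1)
    neighbors = memory_bank[np.argsort(dists)[:k]]  # (k, d)

    # 2. Apply the simple linear transformation to query and neighbors.
    q = x_feat @ W          # (d,)
    kv = neighbors @ W      # (k, d)

    # 3. Single-head attention over the neighbor set mines the
    #    inner relationships among similar one-class features.
    logits = q @ kv.T / np.sqrt(W.shape[1])
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    adapted = attn @ kv     # attention-weighted reconstruction, (d,)

    # 4. Anomaly score: a normal input should be well reconstructed
    #    from its normal neighbors, an anomaly should not.
    return np.linalg.norm(q - adapted)
```

A constraint on `W` (e.g., keeping it away from the zero map) would play the role of the paper's anti-trivial-solution term; without it, a degenerate transformation could collapse all scores.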