We aim to construct a high-performance model for defect detection that detects unknown anomalous patterns in an image without any anomalous training data. To this end, we propose a two-stage framework for building anomaly detectors using normal training data only. We first learn self-supervised deep representations and then build a generative one-class classifier on the learned representations. We learn representations by classifying normal data against CutPaste, a simple data augmentation strategy that cuts an image patch and pastes it at a random location of a larger image. Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects. We improve upon previous state-of-the-art methods by 3.1 AUC when learning representations from scratch. By transfer learning from representations pretrained on ImageNet, we achieve a new state-of-the-art 96.6 AUC. Lastly, we extend the framework to learn and extract representations from patches, allowing us to localize defective areas without annotations during training.
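To make the two stages concrete, below is a minimal sketch, assuming images are NumPy arrays in (H, W, C) layout. The helper names (cutpaste, gaussian_anomaly_score), the patch-size and aspect-ratio ranges, and the choice of a full-covariance Gaussian with Mahalanobis scoring as the generative one-class classifier are illustrative assumptions for exposition, not specifics taken from the abstract.

```python
# Minimal sketch (assumptions noted in the text): a CutPaste-style augmentation
# and a simple Gaussian one-class scorer over learned embeddings.
import numpy as np


def cutpaste(image, area_ratio=(0.02, 0.15), aspect_range=(0.3, 3.3), rng=None):
    """Cut a random rectangular patch from the image and paste it back at a
    random location, producing a synthetic 'anomalous' sample. Parameter
    ranges here are illustrative, not the paper's exact values."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]

    # Sample patch height/width from an area ratio and an aspect ratio.
    area = rng.uniform(*area_ratio) * h * w
    aspect = rng.uniform(*aspect_range)
    ph = min(max(int(round(np.sqrt(area * aspect))), 1), h - 1)
    pw = min(max(int(round(np.sqrt(area / aspect))), 1), w - 1)

    # Random source (cut) and destination (paste) top-left corners.
    sy, sx = rng.integers(0, h - ph), rng.integers(0, w - pw)
    dy, dx = rng.integers(0, h - ph), rng.integers(0, w - pw)

    augmented = image.copy()
    augmented[dy:dy + ph, dx:dx + pw] = image[sy:sy + ph, sx:sx + pw]
    return augmented


def gaussian_anomaly_score(train_embeds, test_embeds):
    """Fit a full-covariance Gaussian to embeddings of normal data and score
    test embeddings by (squared) Mahalanobis distance; higher = more anomalous."""
    mean = train_embeds.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(train_embeds, rowvar=False))
    diff = test_embeds - mean
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
```

In the first stage, a network would be trained to distinguish original images from their cutpaste outputs; in the second stage, its embeddings of normal training images are fed to gaussian_anomaly_score (or another generative one-class model) to score test images.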