Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space, causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training, and even a few of them (~5) improve performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks.
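To illustrate the idea that the mapped samples double as an explanation heatmap, below is a minimal sketch (not the authors' code) of an FCDD-style objective in NumPy: a fully convolutional network's spatial output is squashed pixel-wise with a pseudo-Huber function, and its mean is minimized for nominal samples and pushed up for anomalous ones. The function names and the `heatmap`/`is_anomalous` arguments are illustrative assumptions.

```python
import numpy as np

def pseudo_huber(z):
    # Elementwise pseudo-Huber: sqrt(z^2 + 1) - 1 (smooth, ~|z| for large z)
    return np.sqrt(z ** 2 + 1.0) - 1.0

def fcdd_style_loss(heatmap, is_anomalous):
    # heatmap: (H, W) spatial output of a fully convolutional network.
    # Each entry scores one image region, so the squashed map itself
    # serves as the explanation heatmap.
    a = pseudo_huber(heatmap).mean()
    if is_anomalous:
        # Push anomalies away from the center: -log(1 - exp(-A))
        return -np.log1p(-np.exp(-a))
    # Pull nominal samples toward the center: minimize A
    return a

# Toy check: a "flat" nominal map yields a small loss, while the same
# map labeled anomalous yields a larger one.
nominal = fcdd_style_loss(np.ones((4, 4)), is_anomalous=False)
anomalous = fcdd_style_loss(np.ones((4, 4)), is_anomalous=True)
```

Because the loss is an average over spatial positions, the per-pixel contributions localize which regions drive the anomaly score, which is what makes the output directly usable as a heatmap.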