Current state-of-the-art anomaly detection (AD) methods exploit the powerful representations yielded by large-scale ImageNet training. However, catastrophic forgetting prevents the successful fine-tuning of pre-trained representations on new datasets in the semi-supervised setting, and representations are therefore commonly fixed. In our work, we propose a new method to overcome catastrophic forgetting and thus successfully fine-tune pre-trained representations for AD in the transfer learning setting. Specifically, we induce a multivariate Gaussian distribution for the normal class based on the linkage between generative and discriminative modeling, and use the Mahalanobis distance of normal images to the estimated distribution as the training objective. We additionally propose to use augmentations commonly employed for vicinal risk minimization in a validation scheme to detect the onset of catastrophic forgetting. Extensive evaluations on the public MVTec dataset reveal that our method achieves a new state of the art in the AD task while attaining anomaly segmentation performance comparable to the prior state of the art. Further, ablation studies demonstrate the importance of the induced Gaussian distribution as well as the robustness of the proposed fine-tuning scheme with respect to the choice of augmentations.
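To make the core idea concrete, the following is a minimal sketch (not the authors' code) of fine-tuning a pre-trained feature extractor with a Mahalanobis-distance objective on normal images only. The names (`backbone`, `fit_gaussian`, `mahalanobis_loss`), the covariance regularization, and the training loop are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def fit_gaussian(features):
    """Estimate mean and inverse (regularized) covariance of normal-class features."""
    mu = features.mean(dim=0)
    centered = features - mu
    cov = centered.T @ centered / (features.shape[0] - 1)
    cov += 1e-3 * torch.eye(cov.shape[0])  # regularize so the covariance is invertible (assumed value)
    return mu, torch.linalg.inv(cov)

def mahalanobis_loss(features, mu, cov_inv):
    """Mean squared Mahalanobis distance of features to the induced Gaussian."""
    diff = features - mu
    return torch.einsum('bi,ij,bj->b', diff, cov_inv, diff).mean()

# Usage sketch: `backbone` is any pre-trained feature extractor, `loader` yields normal images.
# with torch.no_grad():
#     mu, cov_inv = fit_gaussian(torch.cat([backbone(x) for x in loader]))
# for x in loader:
#     loss = mahalanobis_loss(backbone(x), mu, cov_inv)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```

At test time, the same Mahalanobis distance can serve as the anomaly score, with large distances indicating anomalous images.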