Surface defect detection is one of the most essential processes in industrial quality inspection. Deep learning-based surface defect detection methods have shown great potential. However, well-performing models usually require large amounts of training data and can only detect defects that appear in the training stage. When facing incremental few-shot data, defect detection models inevitably suffer from catastrophic forgetting and misclassification problems. To solve these problems, this paper proposes a new knowledge distillation network, called the Dual Knowledge Align Network (DKAN). The proposed DKAN method follows a pretraining-finetuning transfer learning paradigm, and a knowledge distillation framework is designed for fine-tuning. Specifically, an Incremental RCNN is proposed to achieve decoupled, stable feature representations of different categories. Under this framework, a Feature Knowledge Align (FKA) loss is designed between class-agnostic feature maps to deal with catastrophic forgetting, and a Logit Knowledge Align (LKA) loss is deployed between logit distributions to tackle misclassification. Experiments conducted on the incremental few-shot NEU-DET dataset show that DKAN outperforms other methods in various few-shot scenes by up to 6.65% in mean Average Precision, which demonstrates the effectiveness of the proposed method.
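As a minimal illustrative sketch of the two alignment losses described above, the snippet below assumes a standard knowledge distillation formulation: mean-squared error between teacher and student feature maps for FKA, and temperature-softened KL divergence between logit distributions for LKA. The function names, the temperature `T`, and the exact loss forms are assumptions for illustration, not the paper's precise definitions.

```python
import torch
import torch.nn.functional as F

def fka_loss(student_feats: torch.Tensor, teacher_feats: torch.Tensor) -> torch.Tensor:
    """Feature Knowledge Align (FKA) sketch: align the student's
    class-agnostic feature maps with the frozen teacher's, here via
    mean-squared error (an assumed, common choice)."""
    return F.mse_loss(student_feats, teacher_feats)

def lka_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
             T: float = 2.0) -> torch.Tensor:
    """Logit Knowledge Align (LKA) sketch: align logit distributions
    with a temperature-softened KL divergence, the usual distillation
    form; T is an assumed hyperparameter."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```

In a distillation setup of this kind, the two terms would typically be weighted and added to the detector's standard training loss during fine-tuning on the novel few-shot categories.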