Knowledge distillation (KD) has shown great promise in transferring learned representations from large models (teachers) to small models (students). However, as the capacity gap between students and teachers grows, existing KD methods fail to deliver further gains. Our work shows that `prior knowledge' is vital to KD, especially when large teachers are applied. Specifically, we propose Dynamic Prior Knowledge (DPK), which integrates part of the teacher's features as prior knowledge before feature distillation. This means that our method takes the teacher's features not only as the `target' but also as the `input'. Besides, we dynamically adjust the proportion of the prior knowledge during training according to the feature gap, thus guiding the student at an appropriate level of difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e., CIFAR100 and ImageNet) and an object detection benchmark (i.e., MS COCO). The results demonstrate the superiority of our method under various settings. Moreover, our DPK makes the performance of the student model positively correlated with that of the teacher model, which means that we can further boost the accuracy of students by applying larger teachers. More importantly, DPK provides a fast solution for teacher model selection for any given student model. Our code will be released at \url{https://github.com/Cuibaby/DPK}.
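The following is a minimal PyTorch-style sketch of the idea described above, not the released implementation: a portion of the teacher's feature map is mixed into the student's feature map as prior knowledge (so the teacher also serves as `input'), and the mixing ratio is adjusted from the current teacher--student feature gap. The function names \texttt{dpk\_mix} and \texttt{feature\_gap\_ratio}, the MSE-based gap measure, and the \texttt{tanh} ratio schedule are illustrative assumptions; see the repository above for the actual method.
\begin{verbatim}
import torch
import torch.nn.functional as F

def dpk_mix(student_feat, teacher_feat, ratio):
    """Replace a random subset of spatial positions in the student's
    feature map with the teacher's features, so the teacher acts as
    'input' (prior knowledge), not only as 'target'."""
    b, c, h, w = student_feat.shape
    # Binary mask: 1 -> take teacher feature (prior), 0 -> keep student feature.
    mask = (torch.rand(b, 1, h, w, device=student_feat.device) < ratio).float()
    return mask * teacher_feat + (1.0 - mask) * student_feat

def feature_gap_ratio(student_feat, teacher_feat, max_ratio=0.5):
    """Map the current teacher-student feature gap to a mixing ratio:
    a larger gap injects more teacher prior, easing the task."""
    gap = F.mse_loss(student_feat, teacher_feat.detach())
    return (max_ratio * torch.tanh(gap)).item()

# Usage inside a training step (student_feat is assumed to be projected
# to the teacher's channel dimension beforehand):
#   ratio   = feature_gap_ratio(student_feat, teacher_feat)
#   mixed   = dpk_mix(student_feat, teacher_feat.detach(), ratio)
#   kd_loss = F.mse_loss(mixed, teacher_feat.detach())
\end{verbatim}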