Knowledge distillation (KD) has shown very promising capabilities in transferring learned representations from large models (teachers) to small models (students). However, as the capacity gap between students and teachers grows, existing KD methods fail to deliver further gains. Our work shows that `prior knowledge' is vital to KD, especially when large teachers are applied. In particular, we propose dynamic prior knowledge (DPK), which integrates part of the teacher's features as prior knowledge before feature distillation. This means that our method also takes the teacher's features as `input', not just as `target'. Moreover, we dynamically adjust the ratio of prior knowledge during training according to the feature gap, thus guiding the student at an appropriate level of difficulty. To evaluate the proposed method, we conduct extensive experiments on two image classification benchmarks (i.e., CIFAR100 and ImageNet) and an object detection benchmark (i.e., MS COCO). The results demonstrate the superiority of our method under varying settings. More importantly, with DPK the performance of the student model is positively correlated with that of the teacher model, which means that we can further boost the accuracy of students by applying larger teachers. Our code will be made publicly available for reproducibility.
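To make the mechanism concrete, the following is a minimal PyTorch sketch of the idea described above: a portion of the teacher's feature map is injected into the student's features as prior knowledge, with the mixing ratio driven by the current feature gap. The function names (`prior_ratio`, `dpk_mix`), the random spatial mask, and the linear gap-to-ratio mapping are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def prior_ratio(f_s, f_t, lo=0.0, hi=0.5):
    """Map the current student-teacher feature gap to a mixing ratio.

    A larger gap (e.g. early training, or a much larger teacher) yields a
    larger ratio, i.e. more teacher features are injected as prior knowledge.
    The squashing and the [lo, hi] range are illustrative choices.
    """
    gap = F.mse_loss(f_s, f_t).detach()              # scalar feature gap
    gap = torch.clamp(gap / (gap + 1.0), 0.0, 1.0)   # squash to [0, 1)
    return lo + (hi - lo) * gap

def dpk_mix(f_s, f_t, ratio):
    """Replace a random subset of spatial positions in the student's feature
    map with the teacher's, so the teacher acts as `input', not just `target'.

    f_s, f_t: (B, C, H, W) student / teacher features, assumed already
    projected to the same shape.
    """
    B, _, H, W = f_s.shape
    mask = (torch.rand(B, 1, H, W, device=f_s.device) < ratio).float()
    return mask * f_t + (1.0 - mask) * f_s

# Hypothetical fragment of a training step.
f_s = torch.randn(8, 256, 14, 14)      # student features
f_t = torch.randn(8, 256, 14, 14)      # teacher features (projected to match)
r = prior_ratio(f_s, f_t)              # dynamic ratio from the feature gap
mixed = dpk_mix(f_s, f_t, r)           # inject prior knowledge
loss_feat = F.mse_loss(mixed, f_t)     # feature-distillation loss on the mix
```

In this sketch, positions taken from the teacher contribute no distillation error, so the effective difficulty of the feature-matching task shrinks as the injected ratio grows; how the ratio schedule is computed in practice is left to the paper's method section.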