Knowledge distillation (KD) is a widely used technique for training compact models in object detection. However, there has been little study of how to distill between heterogeneous detectors. In this paper, we empirically find that better FPN features from a heterogeneous teacher detector can help the student, even though their detection heads and label assignments differ. However, directly aligning the feature maps to distill detectors suffers from two problems. First, the difference in feature magnitude between the teacher and the student could impose overly strict constraints on the student. Second, the FPN stages and channels with large feature magnitudes from the teacher model could dominate the gradient of the distillation loss, overwhelming the effects of other features in KD and introducing considerable noise. To address these issues, we propose to imitate features with the Pearson Correlation Coefficient, focusing on the relational information from the teacher and relaxing the constraints on feature magnitude. Our method consistently outperforms existing detection KD methods and works for both homogeneous and heterogeneous student-teacher pairs. Furthermore, it converges faster. With a powerful MaskRCNN-Swin detector as the teacher, ResNet-50-based RetinaNet and FCOS achieve 41.5% and 43.9% mAP on COCO2017, which are 4.1% and 4.8% higher than their baselines, respectively.
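For illustration, below is a minimal PyTorch-style sketch of a PCC-based feature imitation loss between one pair of student and teacher FPN feature maps. The function name, the per-channel normalization, and the MSE formulation are assumptions for this sketch, not the paper's released implementation; standardizing each channel to zero mean and unit variance makes the MSE equivalent (up to a constant factor) to one minus the Pearson correlation, which discards magnitude and keeps only relational structure.

```python
import torch
import torch.nn.functional as F


def pcc_distillation_loss(student_feat: torch.Tensor,
                          teacher_feat: torch.Tensor,
                          eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical PCC-based feature imitation loss (sketch).

    Both inputs are FPN feature maps of shape (N, C, H, W). Each channel
    is standardized to zero mean and unit variance so the loss depends on
    the correlation pattern of the features rather than their magnitude.
    """
    def standardize(feat: torch.Tensor) -> torch.Tensor:
        n, c, h, w = feat.shape
        flat = feat.permute(1, 0, 2, 3).reshape(c, -1)      # (C, N*H*W)
        mean = flat.mean(dim=-1, keepdim=True)
        std = flat.std(dim=-1, keepdim=True)
        flat = (flat - mean) / (std + eps)
        return flat.reshape(c, n, h, w).permute(1, 0, 2, 3)

    s = standardize(student_feat)
    t = standardize(teacher_feat)
    # MSE between standardized features is equivalent, up to a constant
    # factor, to 1 - Pearson correlation computed per channel.
    return F.mse_loss(s, t)
```

In practice such a loss would be summed over all FPN levels (the teacher's feature maps detached from the graph) and added to the student's detection loss with a weighting coefficient.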