Knowledge distillation has proven to be a promising way to improve the performance of neural networks without incurring any additional computational cost at inference time. To boost the accuracy of object detection, a great number of knowledge distillation methods designed specifically for object detection have been proposed. However, most of these methods focus only on feature-level and label-level distillation, leaving the label assignment step, a unique and crucial procedure in object detection, by the wayside. In this work, we propose a simple but effective knowledge distillation approach that focuses on label assignment in object detection, in which the positive and negative samples of the student network are selected according to the predictions of the teacher network. Our method achieves encouraging results on the MSCOCO2017 benchmark; it can be applied to both one-stage and two-stage detectors, and can be used orthogonally with other knowledge distillation methods.
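The abstract does not spell out the exact assignment rule, so the sketch below is only illustrative of the stated idea: the student's positive and negative samples are decided by the teacher's predictions rather than by a fixed rule against the ground truth alone. It assumes a per-anchor quality score, taken here hypothetically as the product of the teacher's classification confidence and the IoU of the teacher's predicted box with the matched ground-truth box, and two hypothetical thresholds (`pos_thresh`, `neg_thresh`) for splitting anchors into positives, negatives, and ignored samples.

```python
import torch

def teacher_guided_assignment(teacher_cls: torch.Tensor,
                              teacher_iou: torch.Tensor,
                              pos_thresh: float = 0.6,
                              neg_thresh: float = 0.3) -> torch.Tensor:
    """Assign training labels to the student's anchors from teacher predictions.

    teacher_cls: (N,) teacher's maximum classification score per anchor.
    teacher_iou: (N,) IoU between the teacher's predicted box and the matched
        ground-truth box for each anchor.
    Returns: (N,) long tensor with 1 = positive, 0 = negative, -1 = ignored.
    """
    # Hypothetical anchor quality score combining the teacher's classification
    # and localization confidence (an assumption, not taken from the paper).
    quality = teacher_cls * teacher_iou

    labels = torch.full_like(quality, -1, dtype=torch.long)  # default: ignore
    labels[quality >= pos_thresh] = 1  # teacher is confident here: positive
    labels[quality <= neg_thresh] = 0  # teacher rejects this anchor: negative
    return labels

# Toy usage with random per-anchor teacher outputs.
scores = torch.rand(8)
ious = torch.rand(8)
print(teacher_guided_assignment(scores, ious))
```

Any concrete quality measure with this shape would serve the same purpose; the point of the sketch is only that the positive/negative split is driven by teacher outputs, which is what distinguishes this approach from feature-level or label-level distillation.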