This paper proposes anchor pruning for object detection in one-stage anchor-based detectors. While pruning techniques are widely used to reduce the computational cost of convolutional neural networks, they tend to focus on optimizing the backbone networks, where most of the computations are performed. In this work we demonstrate an additional pruning technique, specific to object detection: anchor pruning. With more efficient backbone networks and a growing trend of deploying object detectors on embedded systems, where post-processing steps such as non-maximum suppression can be a bottleneck, the impact of the anchors used in the detection head is becoming increasingly important. In this work, we show that many anchors in the object detection head can be removed without any loss in accuracy. With additional retraining, anchor pruning can even lead to improved accuracy. Extensive experiments on SSD and MS COCO show that the detection head can be made up to 44% more efficient while simultaneously increasing accuracy. Further experiments on RetinaNet and PASCAL VOC show the general effectiveness of our approach. We also introduce `overanchorized' models that can be used together with anchor pruning to eliminate hyperparameters related to the initial shape of anchors. Code and models are available at https://github.com/Mxbonn/anchor_pruning.
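To make the role of anchors in the detection head concrete, the following is a minimal sketch (not the paper's implementation; class and variable names are hypothetical) of a toy SSD/RetinaNet-style head in PyTorch, illustrating why removing anchors shrinks both the head's convolutions and the number of candidate boxes passed to non-maximum suppression.

```python
# Minimal sketch, assuming a single feature-map level of an SSD-style head.
import torch
import torch.nn as nn


class ToyDetectionHead(nn.Module):
    """One level of an anchor-based one-stage detection head."""

    def __init__(self, in_channels: int, num_anchors: int, num_classes: int):
        super().__init__()
        self.num_anchors = num_anchors
        self.num_classes = num_classes
        # Each anchor contributes num_classes classification channels ...
        self.cls_conv = nn.Conv2d(in_channels, num_anchors * num_classes, 3, padding=1)
        # ... and 4 box-regression channels per spatial location.
        self.reg_conv = nn.Conv2d(in_channels, num_anchors * 4, 3, padding=1)

    def forward(self, feat: torch.Tensor):
        b, _, h, w = feat.shape
        cls = self.cls_conv(feat).view(b, self.num_anchors, self.num_classes, h, w)
        reg = self.reg_conv(feat).view(b, self.num_anchors, 4, h, w)
        return cls, reg


head = ToyDetectionHead(in_channels=256, num_anchors=6, num_classes=81)
feat = torch.randn(1, 256, 38, 38)
cls, reg = head(feat)
print(cls.shape, reg.shape)  # 6 anchors -> 6 * 38 * 38 candidate boxes per image

# Pruning an anchor amounts to dropping its output channels from both
# convolutions, which reduces head computation and leaves fewer boxes for NMS.
pruned_head = ToyDetectionHead(in_channels=256, num_anchors=3, num_classes=81)
```

Which anchors to remove, and how the pruned model is retrained, is what the paper studies; the sketch above only shows where the savings in the detection head come from.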