This paper revisits feature pyramid networks (FPN) for one-stage detectors and points out that the success of FPN is due to its divide-and-conquer solution to the optimization problem in object detection rather than to multi-scale feature fusion. From the perspective of optimization, we introduce an alternative way to address the problem instead of adopting complex feature pyramids: {\em utilizing only one-level feature for detection}. Based on this simple and efficient solution, we present You Only Look One-level Feature (YOLOF). In our method, two key components, Dilated Encoder and Uniform Matching, are proposed and bring considerable improvements. Extensive experiments on the COCO benchmark demonstrate the effectiveness of the proposed model. Our YOLOF achieves results comparable to its feature-pyramid counterpart RetinaNet while being $2.5\times$ faster. Without transformer layers, YOLOF matches the performance of DETR in a single-level-feature manner with $7\times$ fewer training epochs. With an image size of $608\times608$, YOLOF achieves 44.3 mAP running at 60 fps on a 2080Ti, which is $13\%$ faster than YOLOv4. Code is available at \url{https://github.com/megvii-model/YOLOF}.
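For readers who want a concrete picture of the Dilated Encoder named above, the PyTorch sketch below is a minimal, hypothetical rendering of the idea: project the single C5 backbone feature to a lower channel width, then stack residual blocks with increasing dilation so that one feature map covers objects of many scales. The channel widths (2048 in, 512 inside the encoder) and dilation rates (2, 4, 6, 8) are assumptions chosen for illustration; consult the released code at the URL above for the exact configuration.

\begin{verbatim}
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    # 1x1 reduce -> 3x3 dilated conv -> 1x1 expand, with a skip connection.
    def __init__(self, channels, mid_channels, dilation):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid_channels, 1),
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, mid_channels, 3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(mid_channels), nn.ReLU(inplace=True),
            nn.Conv2d(mid_channels, channels, 1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.block(x)  # residual sum mixes receptive fields

class DilatedEncoder(nn.Module):
    # Project the C5 feature, then enlarge its receptive field with
    # stacked dilated residual blocks; the output stays single-level.
    def __init__(self, in_channels=2048, channels=512,
                 dilations=(2, 4, 6, 8)):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Conv2d(in_channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.blocks = nn.Sequential(
            *[DilatedResidualBlock(channels, channels // 4, d)
              for d in dilations])

    def forward(self, c5):
        return self.blocks(self.projector(c5))
\end{verbatim}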
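Similarly, the sketch below illustrates the intuition behind Uniform Matching: every ground-truth box is assigned the same number ($k$) of positive anchors, namely its $k$ nearest anchors by center distance, so that large and small objects contribute positives uniformly. The value $k=4$ and the plain center-distance criterion are assumptions for illustration only; the released implementation additionally filters matches with IoU thresholds.

\begin{verbatim}
import torch

def uniform_matching(anchor_boxes, gt_boxes, k=4):
    # anchor_boxes: (A, 4), gt_boxes: (G, 4), both in xyxy format.
    # Returns an (A,) tensor with the matched gt index per anchor,
    # or -1 for unmatched (negative) anchors.
    def centers(boxes):
        return (boxes[:, :2] + boxes[:, 2:]) / 2

    # Pairwise center distances between gts and anchors: (G, A).
    dist = torch.cdist(centers(gt_boxes), centers(anchor_boxes))
    # k nearest anchors per gt -> each gt gets exactly k positives.
    topk = dist.topk(k, dim=1, largest=False).indices

    match = torch.full((anchor_boxes.size(0),), -1, dtype=torch.long)
    for g, idx in enumerate(topk):
        match[idx] = g  # later gts win on the rare overlapping pick
    return match
\end{verbatim}

The point of this rule is that the number of positives per object is independent of object scale, which is what makes a single-level feature trainable without a pyramid.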