RetinaNet introduced the Focal Loss for the classification task and greatly improved one-stage detectors; however, a gap remains between it and two-stage detectors. We analyze the predictions of RetinaNet and find that the misalignment between classification and localization is the main factor: many predicted boxes whose IoU with the ground-truth boxes exceeds 0.5 have classification scores below 0.5, which shows that the classification task still needs to be optimized. In this paper we propose an object confidence task for this problem, which shares features with the classification task. This task uses the IoUs between samples and ground-truth boxes as targets, and its loss is computed only on positive samples during training, which increases the loss weight of positive samples in classification training. In addition, the joint of the classification score and the object confidence is used to guide NMS. Our method not only improves the classification task but also eases the misalignment between classification and localization. To evaluate its effectiveness, we report experiments on the MS COCO 2017 dataset. Without bells and whistles, our method improves AP by 0.7% and 1.0% on the COCO validation set with ResNet50 and ResNet101 respectively under the same training configuration, and it achieves 38.4% AP with twice the training time. Code is at: http://github.com/chenzuge1/RetinaNet-Conf.git.
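To make the two ingredients of the abstract concrete, the sketch below illustrates (a) an object-confidence loss that regresses toward the IoU between each positive sample and its matched ground-truth box, computed on positive samples only, and (b) NMS scored by the joint of the classification score and the object confidence. This is a minimal PyTorch sketch under our own assumptions: the function names, the binary cross-entropy form of the confidence loss, and the elementwise product as the "joint" are illustrative choices, not the paper's verified implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import box_iou, nms


def object_confidence_loss(conf_logits, boxes, gt_boxes, assigned_gt, pos_mask):
    """Confidence loss on positive samples only, with IoU-to-matched-GT as target.

    conf_logits: (N,) raw confidence logits, one per sample
    boxes:       (N, 4) sample boxes (xyxy)
    gt_boxes:    (M, 4) ground-truth boxes (xyxy)
    assigned_gt: (N,) index of the GT box matched to each sample
    pos_mask:    (N,) boolean mask of positive samples
    """
    # IoU between each positive sample and its matched ground-truth box.
    ious = box_iou(boxes[pos_mask], gt_boxes[assigned_gt[pos_mask]]).diagonal()
    # BCE against the IoU target is an assumption; the paper only states that
    # IoUs serve as targets and that only positive-sample losses are used.
    return F.binary_cross_entropy_with_logits(conf_logits[pos_mask], ious)


def joint_score_nms(boxes, cls_scores, conf_logits, iou_thr=0.5):
    """NMS guided by the joint of classification score and object confidence."""
    joint = cls_scores * torch.sigmoid(conf_logits)  # assumed joint: elementwise product
    keep = nms(boxes, joint, iou_thr)
    return boxes[keep], joint[keep]
```

In this reading, the product downweights boxes that are well localized but weakly classified (or vice versa), so the boxes that survive NMS are those on which the two tasks agree, which is the misalignment the abstract targets.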