Deep learning-based object detection and instance segmentation have achieved unprecedented progress. In this paper, we propose Complete-IoU (CIoU) loss and Cluster-NMS for enhancing geometric factors in both bounding box regression and Non-Maximum Suppression (NMS), leading to notable gains in average precision (AP) and average recall (AR) without sacrificing inference efficiency. In particular, we consider three geometric factors, i.e., overlap area, normalized central point distance and aspect ratio, which are crucial for measuring bounding box regression in object detection and instance segmentation. The three geometric factors are then incorporated into CIoU loss for better distinguishing difficult regression cases. Training deep models with CIoU loss yields consistent AP and AR improvements over the widely adopted $\ell_n$-norm loss and IoU-based losses. Furthermore, we propose Cluster-NMS, where NMS during inference is performed by implicitly clustering detected boxes and usually requires fewer iterations. Cluster-NMS is very efficient due to its pure GPU implementation, and geometric factors can be incorporated to improve both AP and AR. In the experiments, CIoU loss and Cluster-NMS have been applied to state-of-the-art instance segmentation (e.g., YOLACT) and object detection (e.g., YOLO v3, SSD and Faster R-CNN) models. Taking YOLACT on MS COCO as an example, our method achieves performance gains of +1.7 AP and +6.2 AR$_{100}$ for object detection, and +0.9 AP and +3.5 AR$_{100}$ for instance segmentation, with 27.1 FPS on one NVIDIA GTX 1080Ti GPU. All the source code and trained models are available at https://github.com/Zzh-tju/CIoU
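For concreteness, the CIoU loss can be sketched in the usual DIoU/CIoU notation (here $\mathbf{b}$ and $\mathbf{b}^{gt}$ denote the centers of the predicted and ground-truth boxes, $\rho(\cdot)$ the Euclidean distance, and $c$ the diagonal length of the smallest box enclosing both; see the paper body for the exact definition):
$$\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{\rho^2(\mathbf{b},\mathbf{b}^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\Big(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\Big)^2, \qquad \alpha = \frac{v}{(1-\mathrm{IoU}) + v},$$
where the three terms correspond to the overlap area, normalized central point distance, and aspect-ratio consistency mentioned above.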