3D object detection serves as the core basis of perception tasks in autonomous driving. Recent years have seen rapid progress in multi-modal fusion strategies for more robust and accurate 3D object detection. However, current research on robust fusion relies entirely on learning-based frameworks, which demand large amounts of training data and are inconvenient to deploy in new scenes. In this paper, we propose GOOD, a general optimization-based fusion framework that achieves satisfying detection without training additional models and is compatible with any combination of 2D and 3D detectors, improving the accuracy and robustness of 3D detection. First, we apply a mutual-sided nearest-neighbor probability model to achieve 3D-2D data association. Then we design an optimization pipeline that optimizes different kinds of instances separately based on the matching results. In addition, a 3D MOT method is introduced to further enhance performance with the aid of previous frames. To the best of our knowledge, this is the first optimization-based late fusion framework for multi-modal 3D object detection, and it can serve as a baseline for subsequent research. Experiments on both the nuScenes and KITTI datasets show that GOOD outperforms PointPillars by 9.1\% in mAP and achieves results competitive with the learning-based late fusion method CLOCs.
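The mutual-sided nearest-neighbor association mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes 3D detections have already been projected to image-plane centers, matches each projected center to its nearest 2D detection center, and keeps only pairs that are each other's nearest neighbor within a hypothetical distance threshold (the probability weighting used in GOOD is omitted).

```python
import numpy as np

def mutual_nearest_match(proj_2d, det_2d, max_dist=50.0):
    """Mutual nearest-neighbor 3D-2D association (illustrative sketch).

    proj_2d: (N, 2) image-plane centers of projected 3D detections.
    det_2d:  (M, 2) centers of 2D detections.
    max_dist: hypothetical pixel-distance gate for accepting a match.
    Returns a list of (i, j) pairs that are mutually nearest neighbors.
    """
    if len(proj_2d) == 0 or len(det_2d) == 0:
        return []
    # Pairwise Euclidean distances between projected 3D and 2D centers, shape (N, M).
    d = np.linalg.norm(proj_2d[:, None, :] - det_2d[None, :, :], axis=-1)
    nn_3d_to_2d = d.argmin(axis=1)  # best 2D candidate for each 3D box
    nn_2d_to_3d = d.argmin(axis=0)  # best 3D candidate for each 2D box
    pairs = []
    for i, j in enumerate(nn_3d_to_2d):
        # Keep the pair only if the preference is mutual and within the gate.
        if nn_2d_to_3d[j] == i and d[i, j] <= max_dist:
            pairs.append((i, j))
    return pairs
```

Unmatched 3D and 2D detections fall through to the separate per-instance optimization branches described in the abstract.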