Lidar-based sensing drives current autonomous vehicles. Despite rapid progress, current Lidar sensors still lag two decades behind traditional color cameras in terms of resolution and cost. For autonomous driving, this means that large objects close to the sensors are easily visible, but far-away or small objects comprise only one or two measurements. This is an issue, especially when these objects turn out to be driving hazards. On the other hand, these same objects are clearly visible in onboard RGB sensors. In this work, we present an approach to seamlessly fuse RGB sensors into Lidar-based 3D recognition. Our approach takes a set of 2D detections and generates dense 3D virtual points to augment an otherwise sparse 3D point cloud. These virtual points integrate naturally into any standard Lidar-based 3D detector alongside regular Lidar measurements. The resulting multi-modal detector is simple and effective. Experimental results on the large-scale nuScenes dataset show that our framework improves a strong CenterPoint baseline by a significant 6.6 mAP, and outperforms competing fusion approaches. Code and more visualizations are available at https://tianweiy.github.io/mvp/
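To make the core idea concrete, below is a minimal sketch of virtual-point generation: pixels are sampled inside a 2D detection, each sample borrows the depth of the nearest projected Lidar point, and the samples are unprojected back into 3D to densify the cloud. This is an illustrative simplification, not the paper's implementation; the function name `generate_virtual_points`, the sampling strategy, and the nearest-neighbor depth assignment shown here are assumptions for exposition.

```python
import numpy as np

def generate_virtual_points(lidar_points, mask_pixels, K, num_samples=50):
    """Illustrative sketch (not the official MVP code).

    lidar_points: (N, 3) Lidar points already in the camera frame.
    mask_pixels:  (M, 2) integer pixel coordinates inside one 2D detection.
    K:            (3, 3) camera intrinsic matrix.
    Returns (S, 3) virtual 3D points in the camera frame.
    """
    # Project the real Lidar points into the image plane.
    proj = (K @ lidar_points.T).T          # (N, 3); third column is depth
    depth = proj[:, 2:3]
    uv = proj[:, :2] / depth               # (N, 2) pixel coordinates

    # Randomly sample pixels from inside the 2D detection.
    size = min(num_samples, len(mask_pixels))
    idx = np.random.choice(len(mask_pixels), size=size, replace=False)
    samples = mask_pixels[idx].astype(np.float64)     # (S, 2)

    # Assign each sampled pixel the depth of its nearest projected Lidar point.
    d2 = ((samples[:, None, :] - uv[None, :, :]) ** 2).sum(-1)  # (S, N)
    z = depth[d2.argmin(axis=1)]                      # (S, 1)

    # Unproject the sampled pixels to 3D using the borrowed depth.
    pix_h = np.concatenate([samples, np.ones((size, 1))], axis=1)  # homogeneous
    virtual = (np.linalg.inv(K) @ (pix_h * z).T).T    # (S, 3)
    return virtual
```

The resulting virtual points can simply be concatenated with the raw Lidar sweep before voxelization, which is what lets them plug into an unmodified Lidar-based detector.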