Building 3D perception systems for autonomous vehicles that do not rely on LiDAR is a critical research problem because of the high expense of LiDAR systems compared to cameras and other sensors. Current methods use multi-view RGB data collected from cameras around the vehicle and neurally "lift" features from the perspective images to the 2D ground plane, yielding a "bird's eye view" (BEV) feature representation of the 3D space around the vehicle. Recent research focuses on the way the features are lifted from images to the BEV plane. We instead propose a simple baseline model, where the "lifting" step simply averages features from all projected image locations, and find that it outperforms the current state-of-the-art in BEV vehicle segmentation. Our ablations show that batch size, data augmentation, and input resolution play a large part in performance. Additionally, we reconsider the utility of radar input, which has previously been either ignored or found non-helpful by recent works. With a simple RGB-radar fusion module, we obtain a sizable boost in performance, approaching the accuracy of a LiDAR-enabled system.
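The parameter-free lifting step described above can be sketched as follows: project each 3D voxel center into every camera, sample the image feature at the projected pixel, and average over all cameras with a valid projection. This is a minimal NumPy illustration, not the paper's implementation; the function name, nearest-neighbor sampling (the paper uses bilinear sampling), and the pinhole camera convention are assumptions for the sketch.

```python
import numpy as np

def lift_to_bev(feats, intrinsics, extrinsics, voxel_xyz):
    """Parameter-free lifting: project each 3D voxel into every camera,
    sample the feature at the projected pixel (nearest-neighbor here for
    brevity), and average over all cameras where the projection is valid.

    feats:      (n_cams, C, H, W) per-camera image feature maps
    intrinsics: (n_cams, 3, 3) pinhole intrinsic matrices
    extrinsics: (n_cams, 4, 4) world-to-camera transforms
    voxel_xyz:  (n_vox, 3) 3D voxel centers in world coordinates
    returns:    (n_vox, C) averaged feature per voxel
    """
    n_cams, C, H, W = feats.shape
    n_vox = voxel_xyz.shape[0]
    acc = np.zeros((n_vox, C))
    count = np.zeros((n_vox, 1))
    homo = np.concatenate([voxel_xyz, np.ones((n_vox, 1))], axis=1)

    for cam in range(n_cams):
        # World coordinates -> camera coordinates -> image plane.
        cam_pts = (extrinsics[cam] @ homo.T).T[:, :3]
        pix = (intrinsics[cam] @ cam_pts.T).T
        z = pix[:, 2]
        valid = z > 1e-3  # voxel must be in front of the camera
        u = np.zeros(n_vox)
        v = np.zeros(n_vox)
        u[valid] = pix[valid, 0] / z[valid]
        v[valid] = pix[valid, 1] / z[valid]
        ui = np.round(u).astype(int)
        vi = np.round(v).astype(int)
        valid &= (ui >= 0) & (ui < W) & (vi >= 0) & (vi < H)
        # Accumulate the sampled feature for every voxel this camera sees.
        acc[valid] += feats[cam, :, vi[valid], ui[valid]]
        count[valid] += 1

    # Average over the cameras that observed each voxel; voxels seen by
    # no camera keep a zero feature.
    return acc / np.maximum(count, 1)
```

The key design point is that this step has no learned parameters: all modeling capacity sits in the 2D image backbone before lifting and the BEV network after it.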