Perception systems in modern autonomous driving vehicles typically take inputs from complementary multi-modal sensors, e.g., LiDAR and cameras. However, in real-world applications, sensor corruptions and failures degrade performance, compromising driving safety. In this paper, we propose a robust framework, called MetaBEV, to address extreme real-world conditions spanning six types of sensor corruption and two extreme sensor-missing situations. In MetaBEV, signals from multiple sensors are first processed by modality-specific encoders. Subsequently, a set of dense BEV queries, termed meta-BEV, is initialized. These queries are then processed iteratively by a BEV-Evolving decoder, which selectively aggregates deep features from LiDAR, cameras, or both modalities. The updated BEV representations are further leveraged for multiple 3D prediction tasks. Additionally, we introduce a new M2oE structure to alleviate the per-task performance drop in multi-task joint learning. Finally, MetaBEV is evaluated on the nuScenes dataset with 3D object detection and BEV map segmentation tasks. Experiments show that MetaBEV outperforms prior arts by a large margin on both full and corrupted modalities. For instance, when the LiDAR signal is missing, MetaBEV improves detection NDS by 35.5% and segmentation mIoU by 17.7% over the vanilla BEVFusion model; and when the camera signal is absent, MetaBEV still achieves 69.2% NDS and 53.7% mIoU, even surpassing previous methods that operate on full modalities. Moreover, MetaBEV is competitive with previous methods in both canonical perception and multi-task learning settings, setting a new state of the art on nuScenes BEV map segmentation with 70.4% mIoU.
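To make the cross-modal mechanism concrete, below is a minimal PyTorch sketch of the idea described above: a set of learnable dense BEV queries (the meta-BEV) is iteratively refined by cross-attending to whichever modality features survive, so a corrupted or missing sensor is simply left out of the key/value set. This is a sketch under stated assumptions, not the authors' implementation; all names (`BEVEvolvingDecoderSketch`, `bev_h`, `num_layers`, etc.) and the plain multi-head attention layers are illustrative choices.

```python
# Minimal sketch (not the authors' code) of the meta-BEV idea:
# learnable dense BEV queries cross-attend to whatever modality
# features are available, so a missing sensor is simply skipped.
import torch
import torch.nn as nn

class BEVEvolvingDecoderSketch(nn.Module):
    def __init__(self, dim=256, bev_h=50, bev_w=50, num_layers=3, num_heads=8):
        super().__init__()
        # Dense BEV queries, termed "meta-BEV" in the paper.
        self.meta_bev = nn.Parameter(torch.randn(bev_h * bev_w, dim))
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "cross_attn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
                "norm": nn.LayerNorm(dim),
                "ffn": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                     nn.Linear(4 * dim, dim)),
                "ffn_norm": nn.LayerNorm(dim),
            })
            for _ in range(num_layers)
        ])

    def forward(self, feats):
        # feats: dict mapping modality name -> (B, N, dim) features;
        # a corrupted/absent sensor is passed as None and ignored.
        available = [f for f in feats.values() if f is not None]
        assert available, "at least one modality must survive"
        kv = torch.cat(available, dim=1)           # fuse whatever is present
        B = kv.shape[0]
        q = self.meta_bev.unsqueeze(0).expand(B, -1, -1)
        for layer in self.layers:                  # iterative BEV evolving
            attn_out, _ = layer["cross_attn"](q, kv, kv)
            q = layer["norm"](q + attn_out)
            q = layer["ffn_norm"](q + layer["ffn"](q))
        return q                                   # updated BEV representation
```

Because the queries live in a fixed BEV grid rather than in any single sensor's feature space, dropping a modality only shrinks the key/value set; the decoder and the downstream task heads are unchanged, which is consistent with the graceful degradation the abstract reports under sensor failures.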
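The abstract also mentions an M2oE structure for alleviating task interference in multi-task joint learning. The sketch below shows one plausible reading: a mixture-of-experts feed-forward block in which a router assigns each BEV token to an expert, so detection and segmentation need not share every FFN parameter. The top-1 routing rule, the expert count, and all names here are assumptions for illustration, not the paper's specification.

```python
# Hedged sketch of an M2oE-style mixture-of-experts FFN. Top-1
# routing and all names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFFNSketch(nn.Module):
    def __init__(self, dim=256, num_experts=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):
        # x: (B, L, dim) BEV tokens; route each token to its top-1 expert.
        gate = F.softmax(self.router(x), dim=-1)    # (B, L, num_experts)
        weight, idx = gate.max(dim=-1)              # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                         # tokens routed to expert e
            if mask.any():
                out[mask] = expert(x[mask]) * weight[mask].unsqueeze(-1)
        return out
```

Routing tokens to partially disjoint experts gives each task extra capacity that is not fully shared, which is the general kind of mechanism the abstract invokes to reduce the per-task performance drop in multi-task joint learning.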