LiDAR and cameras are two complementary sensors for 3D perception in autonomous driving. LiDAR point clouds provide accurate spatial and geometric information, while RGB images provide texture and color information for context reasoning. To exploit LiDAR and cameras jointly, existing fusion methods tend to align each 3D point to exactly one projected image pixel based on calibration, i.e., a one-to-one mapping. However, the performance of these approaches relies heavily on the calibration quality, which is sensitive to the temporal and spatial synchronization of the sensors. Therefore, we propose a Dynamic Cross Attention (DCA) module with a novel one-to-many cross-modality mapping that learns multiple offsets from the initial projection towards its neighborhood, and thus develops tolerance to calibration error. Moreover, a \textit{dynamic query enhancement} is proposed to perceive the model-independent calibration, which further strengthens DCA's tolerance to the initial misalignment. The whole fusion architecture, named Dynamic Cross Attention Network (DCAN), exploits multi-level image features and adapts to multiple representations of point clouds, which allows DCA to serve as a plug-in fusion module. Extensive experiments on nuScenes and KITTI demonstrate DCA's effectiveness. The proposed DCAN outperforms state-of-the-art methods on the nuScenes detection challenge.
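The contrast between one-to-one and one-to-many mapping can be illustrated with a minimal sketch. The code below projects a 3D point to a pixel via a pinhole model, then samples the image feature map either at that single location or at several offset locations around it with attention weights, mimicking DCA's tolerance to projection error. All names (`project_point`, `sample_one_to_many`), the toy feature map, and the fixed offsets and uniform weights (stand-ins for the offsets and attention weights the module would learn) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: fixed offsets/weights stand in for the
# learned offsets and attention weights of the DCA module.

def project_point(point_3d, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (camera frame) to pixel coords."""
    x, y, z = point_3d
    return fx * x / z + cx, fy * y / z + cy

def bilinear(feat, u, v):
    """Bilinear interpolation on a 2D feature map (list of rows)."""
    h, w = len(feat), len(feat[0])
    u = min(max(u, 0.0), w - 1.0)   # clamp to the valid image range
    v = min(max(v, 0.0), h - 1.0)
    u0, v0 = int(u), int(v)
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    du, dv = u - u0, v - v0
    return (feat[v0][u0] * (1 - du) * (1 - dv)
            + feat[v0][u1] * du * (1 - dv)
            + feat[v1][u0] * (1 - du) * dv
            + feat[v1][u1] * du * dv)

def sample_one_to_many(feat, u, v, offsets, weights):
    """One-to-many sampling: attend to several offset locations around
    the initial projection instead of the single projected pixel."""
    return sum(w * bilinear(feat, u + du, v + dv)
               for (du, dv), w in zip(offsets, weights))

# Toy 4x4 feature map where feat[r][c] = 4*r + c.
feat = [[float(r * 4 + c) for c in range(4)] for r in range(4)]

# Initial (possibly miscalibrated) projection of one 3D point.
u, v = project_point((0.5, 0.2, 2.0), fx=4.0, fy=4.0, cx=1.5, cy=1.5)

one_to_one = bilinear(feat, u, v)          # existing one-to-one mapping
offsets = [(-0.5, 0.0), (0.5, 0.0), (0.0, -0.5), (0.0, 0.5)]
weights = [0.25] * 4                        # uniform, for illustration
fused = sample_one_to_many(feat, u, v, offsets, weights)
```

Because the sampled neighborhood covers a region rather than a single pixel, a small calibration error in `(u, v)` shifts the fused feature only gradually, which is the intuition behind DCA's robustness to misalignment.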