Fusing LiDAR and camera information is essential for achieving accurate and reliable 3D object detection in autonomous driving systems. This is challenging due to the difficulty of combining multi-granularity geometric and semantic features from two drastically different modalities. Recent approaches aim to exploit the semantic density of camera features by lifting points in 2D camera images (referred to as seeds) into 3D space and then incorporating 2D semantics via cross-modal interaction or fusion techniques. However, depth information is under-investigated in these approaches when lifting points into 3D space, so 2D semantics cannot be reliably fused with 3D points. Moreover, their multi-modal fusion strategies, implemented as concatenation or attention, either cannot effectively fuse 2D and 3D information or are unable to perform fine-grained interactions in the voxel space. To this end, we propose a novel framework that better utilizes depth information and enables fine-grained cross-modal interaction between LiDAR and camera, consisting of two important components. First, a Multi-Depth Unprojection (MDU) method with depth-aware designs is used to enhance the depth quality of the lifted points at each interaction level. Second, a Gated Modality-Aware Convolution (GMA-Conv) block is applied to modulate voxels involved with the camera modality in a fine-grained manner and then aggregate multi-modal features into a unified space. Together they provide the detection head with more comprehensive features from LiDAR and camera. On the nuScenes test benchmark, our proposed method, abbreviated as MSMDFusion, achieves state-of-the-art 3D object detection results with 71.5% mAP and 74.0% NDS, and strong tracking results with 74.0% AMOTA, without using test-time augmentation or ensemble techniques. The code is available at https://github.com/SxJyJay/MSMDFusion.
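To make the two components concrete, the sketch below illustrates the underlying ideas in simplified form: lifting 2D seed pixels into 3D at several candidate depths, and gating camera-derived voxel features before merging them with LiDAR voxel features. This is a minimal illustration only, not the MSMDFusion implementation; the function and module names (multi_depth_unproject, GatedFusion), the tensor shapes, and the use of dense rather than sparse 3D convolutions are all simplifying assumptions, and the actual method additionally uses depth-aware seed selection and multi-scale interaction.

```python
# Illustrative sketch only (not the authors' implementation). Shapes, names,
# and the dense-voxel setting are assumptions made for clarity.
import torch
import torch.nn as nn


def multi_depth_unproject(seeds_uv, candidate_depths, intrinsics):
    """Lift 2D seed pixels into 3D camera-frame points at several candidate depths.

    seeds_uv:         (N, 2) pixel coordinates of the seeds
    candidate_depths: (N, K) K hypothesised depths per seed
    intrinsics:       (3, 3) camera intrinsic matrix
    returns:          (N, K, 3) one 3D point per (seed, depth) pair
    """
    N, _ = seeds_uv.shape
    homo = torch.cat([seeds_uv, torch.ones(N, 1)], dim=1)         # (N, 3) homogeneous pixels
    rays = (intrinsics.inverse() @ homo.T).T                      # (N, 3) viewing rays
    return rays.unsqueeze(1) * candidate_depths.unsqueeze(-1)     # scale each ray by each depth


class GatedFusion(nn.Module):
    """Toy gated fusion: a learned gate modulates camera voxel features
    before they are aggregated with LiDAR voxel features."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv3d(2 * channels, channels, kernel_size=1)
        self.out = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, lidar_feat, camera_feat):
        # lidar_feat, camera_feat: (B, C, D, H, W) dense voxel grids
        gate = torch.sigmoid(self.gate(torch.cat([lidar_feat, camera_feat], dim=1)))
        camera_feat = gate * camera_feat                          # suppress unreliable camera voxels
        return self.out(torch.cat([lidar_feat, camera_feat], dim=1))


if __name__ == "__main__":
    pts = multi_depth_unproject(torch.rand(8, 2) * 100,           # 8 seeds
                                torch.rand(8, 3) * 50,            # 3 candidate depths each
                                torch.eye(3))
    fused = GatedFusion(16)(torch.rand(1, 16, 8, 32, 32), torch.rand(1, 16, 8, 32, 32))
    print(pts.shape, fused.shape)  # torch.Size([8, 3, 3]) torch.Size([1, 16, 8, 32, 32])
```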