Fusing LiDAR and camera information is essential for achieving accurate and reliable 3D object detection in autonomous driving systems. This is challenging due to the difficulty of combining multi-granularity geometric and semantic features from two drastically different modalities. Recent approaches aim to exploit the semantic density of camera features by lifting points in 2D camera images (referred to as seeds) into 3D space, and then incorporating 2D semantics via cross-modal interaction or fusion techniques. However, depth information is under-investigated in these approaches when lifting points into 3D space, so 2D semantics cannot be reliably fused with 3D points. Moreover, their multi-modal fusion strategy, implemented as concatenation or attention, either fails to effectively fuse 2D and 3D information or cannot perform fine-grained interactions in the voxel space. To this end, we propose a novel framework that makes better use of depth information and enables fine-grained cross-modal interaction between LiDAR and camera, consisting of two important components. First, a Multi-Depth Unprojection (MDU) method with depth-aware designs is used to enhance the depth quality of the lifted points at each interaction level. Second, a Gated Modality-Aware Convolution (GMA-Conv) block is applied to modulate voxels involved with the camera modality in a fine-grained manner and then aggregate multi-modal features into a unified space. Together they provide the detection head with more comprehensive features from LiDAR and camera. On the nuScenes test benchmark, our proposed method, abbreviated as MSMDFusion, achieves state-of-the-art 3D object detection results with 71.5% mAP and 74.0% NDS, and strong tracking results with 74.0% AMOTA, without using test-time augmentation or ensemble techniques.
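To make the two components concrete, below is a minimal, illustrative sketch in PyTorch, not the authors' implementation. It assumes a standard pinhole camera model with intrinsics K; the function and module names (`unproject_seeds`, `GatedFusion`), feature dimensions, and the number of candidate depths per seed are all hypothetical choices for illustration. `unproject_seeds` shows the basic idea of lifting 2D seeds to 3D points at multiple candidate depths, and `GatedFusion` shows a simple sigmoid-gated merge of camera features into the LiDAR branch, in the spirit of the gated modality-aware fusion described above.

```python
# Illustrative sketch only: hypothetical shapes and module names, not the
# authors' MDU / GMA-Conv code.
import torch
import torch.nn as nn

def unproject_seeds(uv, depths, K_inv):
    """Lift 2D seed pixels into 3D at several candidate depths.

    uv:     (N, 2) pixel coordinates of seeds
    depths: (N, D) D candidate depths per seed (multi-depth lifting)
    K_inv:  (3, 3) inverse camera intrinsics
    Returns (N, D, 3) 3D points in the camera frame.
    """
    ones = torch.ones_like(uv[:, :1])
    homo = torch.cat([uv, ones], dim=1)                 # (N, 3) homogeneous pixels
    rays = homo @ K_inv.T                               # (N, 3) back-projected rays
    return rays.unsqueeze(1) * depths.unsqueeze(-1)     # scale each ray by each depth

class GatedFusion(nn.Module):
    """Toy gated fusion: a sigmoid gate modulates camera features per voxel
    before they are merged with the LiDAR features."""
    def __init__(self, c_lidar, c_cam):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(c_lidar + c_cam, c_cam), nn.Sigmoid())
        self.out = nn.Linear(c_lidar + c_cam, c_lidar)

    def forward(self, f_lidar, f_cam):
        g = self.gate(torch.cat([f_lidar, f_cam], dim=-1))      # per-voxel gate in (0, 1)
        return self.out(torch.cat([f_lidar, g * f_cam], dim=-1))

if __name__ == "__main__":
    K = torch.tensor([[800., 0., 320.],
                      [0., 800., 240.],
                      [0., 0., 1.]])
    uv = torch.rand(100, 2) * 640                # 100 seed pixels
    depths = torch.rand(100, 3) * 50             # 3 candidate depths per seed
    pts = unproject_seeds(uv, depths, torch.linalg.inv(K))   # (100, 3, 3)
    fusion = GatedFusion(c_lidar=64, c_cam=32)
    fused = fusion(torch.randn(100, 64), torch.randn(100, 32))  # (100, 64)
    print(pts.shape, fused.shape)
```

Note that the sketch applies the gate per point with linear layers for brevity; the GMA-Conv block described in the abstract operates on sparse voxel features, and MDU additionally uses depth-aware designs to select and refine the candidate depths rather than sampling them at random.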