LiDAR-based 3D object detection is of paramount importance for autonomous driving. Recent trends show remarkable improvements in bird's-eye-view (BEV)-based and point-based methods, which demonstrate superior performance compared to their range-view counterparts. This paper presents an insight that leverages the range-view representation to enhance 3D points for accurate 3D object detection. Specifically, we introduce a Redemption from Range-view Module (R2M), a plug-and-play approach for 3D surface texture enhancement from the 2D range view to the 3D point view. R2M comprises BasicBlock for 2D feature extraction, a Hierarchical-Dilated (HD) Meta Kernel for expanding the 3D receptive field, and Feature Points Redemption (FPR) for recovering 3D surface texture information. R2M can be seamlessly integrated into state-of-the-art LiDAR-based 3D object detectors as a preprocessing step and achieves appealing improvements, e.g., 1.39%, 1.67%, and 1.97% mAP gains on the easy, moderate, and hard difficulty levels of the KITTI val set, respectively. Based on R2M, we further propose R2Detector (R2Det) with Synchronous-Grid RoI Pooling for accurate box refinement. R2Det outperforms existing range-view-based methods by a significant margin on both the KITTI benchmark and the Waymo Open Dataset. Code will be made publicly available.
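Since R2M is described as a plug-and-play preprocessing module that lifts 2D range-view features back onto 3D points, a minimal sketch of how such an enhancement step could be wired into an existing point-based detector is given below. All layer choices, channel sizes, and the gather-based "redemption" step are illustrative assumptions; the paper's actual BasicBlock, HD Meta Kernel, and FPR components are not specified in this abstract.

```python
# Minimal sketch (not the authors' code): a plug-and-play range-view
# enhancement module in the spirit of R2M, used as preprocessing before
# an arbitrary point-based detector. All internals are assumptions.
import torch
import torch.nn as nn


class RangeViewEnhancer(nn.Module):
    """Hypothetical stand-in for R2M: extracts 2D features from a range
    image and scatters them back onto the 3D points that produced it."""

    def __init__(self, in_channels: int = 5, feat_channels: int = 32):
        super().__init__()
        # Plain and dilated 2D convs as placeholders for BasicBlock and
        # the Hierarchical-Dilated Meta Kernel described in the paper.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1),
            nn.BatchNorm2d(feat_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=2, dilation=2),
            nn.BatchNorm2d(feat_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, range_image: torch.Tensor, pixel_index: torch.Tensor,
                point_features: torch.Tensor) -> torch.Tensor:
        """
        range_image:    (B, C, H, W) projected LiDAR sweep.
        pixel_index:    (B, N) flattened H*W index of each 3D point.
        point_features: (B, N, C_pt) original per-point features.
        Returns per-point features augmented with range-view context,
        which can then be fed to any downstream 3D detector.
        """
        feat_2d = self.encoder(range_image)                    # (B, F, H, W)
        b, f, h, w = feat_2d.shape
        flat = feat_2d.view(b, f, h * w)                       # (B, F, H*W)
        idx = pixel_index.unsqueeze(1).expand(-1, f, -1)       # (B, F, N)
        redeemed = torch.gather(flat, 2, idx).transpose(1, 2)  # (B, N, F)
        # Concatenate recovered surface context onto the raw point features.
        return torch.cat([point_features, redeemed], dim=-1)


if __name__ == "__main__":
    enhancer = RangeViewEnhancer()
    rv = torch.randn(2, 5, 64, 512)                # toy range image
    idx = torch.randint(0, 64 * 512, (2, 1024))    # point-to-pixel indices
    pts = torch.randn(2, 1024, 4)                  # x, y, z, intensity
    enhanced = enhancer(rv, idx, pts)              # (2, 1024, 4 + 32)
    print(enhanced.shape)
```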