Vision-based 3D semantic occupancy perception is a key technology for robotics, including autonomous vehicles, offering an enhanced three-dimensional understanding of the environment. This approach, however, typically requires more computational resources than BEV or 2D methods. We propose a novel 3D semantic occupancy perception method, OccupancyDETR, which consists of a DETR-like object detection module and a mixed dense-sparse 3D occupancy decoder. Our approach distinguishes between foreground and background within a scene. First, foreground objects are detected by the DETR-like object detection module. Then, queries for both foreground and background objects are fed into the mixed dense-sparse 3D occupancy decoder, which performs upsampling in dense and sparse manners, respectively. Finally, a MaskFormer is used to infer the semantics of the background voxels. Our approach strikes a balance between efficiency and accuracy, achieving faster inference, lower resource consumption, and improved performance on small objects. We demonstrate the effectiveness of the proposed method on the SemanticKITTI dataset, achieving an mIoU of 14 and a processing speed of 10 FPS, thereby presenting a promising solution for real-time 3D semantic occupancy perception.
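
As a rough illustration of the pipeline the abstract describes, the following is a minimal PyTorch sketch. All class names (`OccupancyDETRSketch`, `MixedDenseSparseDecoder`), grid sizes, query counts, and the foreground/background split rule are assumptions made for this example, not the paper's actual implementation; the learnable voxel embedding stands in for image features lifted into 3D, and the background's sparse upsampling plus MaskFormer step is simplified to a query-voxel dot-product mask head.

```python
import torch
import torch.nn as nn


class MixedDenseSparseDecoder(nn.Module):
    """Illustrative mixed dense-sparse 3D occupancy decoder (names assumed).

    Each foreground query decodes a small dense occupancy grid inside its
    detected box. Background voxels get MaskFormer-style semantics: each
    background query predicts a class plus a soft mask over the scene grid.
    The paper's sparse upsampling for the background is simplified here to
    a plain query-voxel dot product.
    """
    def __init__(self, d_model=256, n_classes=20, obj_grid=8,
                 scene_grid=(32, 32, 4)):
        super().__init__()
        self.obj_grid, self.scene_grid = obj_grid, scene_grid
        n_voxels = scene_grid[0] * scene_grid[1] * scene_grid[2]
        # Dense branch: one small occupancy grid per foreground query.
        self.dense_obj_head = nn.Linear(d_model, obj_grid ** 3)
        # Placeholder per-voxel embeddings; a real model would lift image
        # features into 3D instead of learning a free parameter.
        self.voxel_embed = nn.Parameter(torch.randn(n_voxels, d_model))
        self.class_head = nn.Linear(d_model, n_classes)

    def forward(self, fg_queries, bg_queries):
        B, n_fg, _ = fg_queries.shape
        # (B, N_fg, g, g, g) occupancy logits, one local grid per object.
        fg_occ = self.dense_obj_head(fg_queries).view(
            B, n_fg, self.obj_grid, self.obj_grid, self.obj_grid)
        # MaskFormer-style background semantics: per-query class scores
        # weighted by per-query soft masks over every scene voxel.
        bg_cls = self.class_head(bg_queries)                 # (B, N_bg, C)
        bg_mask = torch.einsum("bqd,vd->bqv", bg_queries,
                               self.voxel_embed)             # (B, N_bg, V)
        bg_sem = torch.einsum("bqc,bqv->bcv", bg_cls.softmax(-1),
                              bg_mask.sigmoid())             # (B, C, V)
        return fg_occ, bg_sem.view(B, -1, *self.scene_grid)


class OccupancyDETRSketch(nn.Module):
    """End-to-end sketch: image -> DETR-like detection -> occupancy decoding."""
    def __init__(self, d_model=256, n_queries=50, n_classes=20):
        super().__init__()
        # Toy patch-embedding backbone standing in for a real image encoder.
        self.backbone = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        enc = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        dec = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=2)
        self.decoder = nn.TransformerDecoder(dec, num_layers=2)
        self.queries = nn.Embedding(n_queries, d_model)
        self.obj_head = nn.Linear(d_model, n_classes + 1)  # +1 = "no object"
        self.occ_decoder = MixedDenseSparseDecoder(d_model, n_classes)

    def forward(self, image):
        feats = self.backbone(image).flatten(2).transpose(1, 2)  # (B, HW, d)
        memory = self.encoder(feats)
        q = self.queries.weight.unsqueeze(0).expand(image.size(0), -1, -1)
        hs = self.decoder(q, memory)                             # (B, Nq, d)
        # Split queries into foreground/background. A stand-in rule: queries
        # whose top class is not "no object" count as foreground.
        is_fg = self.obj_head(hs).argmax(-1) != self.obj_head.out_features - 1
        fg = hs * is_fg.unsqueeze(-1).float()   # zero-out keeps shapes static
        bg = hs * (~is_fg).unsqueeze(-1).float()
        return self.occ_decoder(fg, bg)


if __name__ == "__main__":
    model = OccupancyDETRSketch()
    fg_occ, bg_sem = model(torch.randn(1, 3, 256, 256))
    print(fg_occ.shape, bg_sem.shape)  # (1, 50, 8, 8, 8) (1, 20, 32, 32, 4)
```

The zero-masking split is a shape-preserving simplification; a real implementation would gather only the confident foreground queries and route the rest to the background branch.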