Vision-based perception for autonomous driving has undergone a transformation from bird's-eye-view (BEV) representations to 3D semantic occupancy. Compared with BEV planes, 3D semantic occupancy additionally provides structural information along the vertical direction. This paper presents OccFormer, a dual-path transformer network that effectively processes the 3D volume for semantic occupancy prediction. OccFormer achieves long-range, dynamic, and efficient encoding of the camera-generated 3D voxel features by decomposing the heavy 3D processing into local and global transformer pathways along the horizontal plane. For the occupancy decoder, we adapt the vanilla Mask2Former to 3D semantic occupancy by proposing preserve-pooling and class-guided sampling, which notably mitigate the sparsity and class imbalance. Experimental results demonstrate that OccFormer significantly outperforms existing methods for semantic scene completion on the SemanticKITTI dataset and for LiDAR semantic segmentation on the nuScenes dataset. Code is available at \url{https://github.com/zhangyp15/OccFormer}.
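To make the dual-path decomposition concrete, below is a minimal PyTorch sketch of one encoder block: a local pathway attends within each horizontal slice of the voxel volume, while a global pathway attends over the vertically collapsed BEV plane, and the two are merged with a learned sigmoid gate. The tensor layout (B, C, H, X, Y), the use of plain multi-head attention in place of windowed attention, and the gated fusion are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Illustrative dual-path encoding of a voxel volume (B, C, H, X, Y).

    A local pathway attends within each horizontal slice; a global pathway
    attends over the vertically collapsed BEV plane. Shapes and the fusion
    rule are assumptions for illustration, not the paper's exact layers.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vox: torch.Tensor) -> torch.Tensor:
        B, C, H, X, Y = vox.shape
        tokens = vox.permute(0, 2, 3, 4, 1)  # (B, H, X, Y, C)

        # Local pathway: self-attention within each horizontal slice.
        local = tokens.reshape(B * H, X * Y, C)
        local, _ = self.local_attn(local, local, local)
        local = local.reshape(B, H, X, Y, C)

        # Global pathway: collapse the vertical axis to a BEV plane,
        # attend over it, then broadcast back to every height level.
        bev = tokens.mean(dim=1).reshape(B, X * Y, C)
        bev, _ = self.global_attn(bev, bev, bev)
        bev = bev.reshape(B, 1, X, Y, C).expand(-1, H, -1, -1, -1)

        # Gated fusion of the two pathways (an illustrative choice).
        gate = torch.sigmoid(self.gate(torch.cat([local, bev], dim=-1)))
        fused = self.norm(tokens + gate * local + (1 - gate) * bev)
        return fused.permute(0, 4, 1, 2, 3)  # back to (B, C, H, X, Y)

block = DualPathBlock(dim=64)
out = block(torch.randn(2, 64, 4, 16, 16))
print(out.shape)  # torch.Size([2, 64, 4, 16, 16])
```

Collapsing the vertical axis keeps the global attention cost quadratic in the BEV resolution rather than in the full voxel count, which is one way to realize the efficient horizontal-plane encoding the abstract describes.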