In this work, we present a unified framework for multi-modality 3D object detection, named UVTR. The proposed method aims to unify multi-modality representations in the voxel space for accurate and robust single- or cross-modality 3D detection. To this end, the modality-specific space is first designed to represent different inputs in the voxel feature space. Different from previous work, our approach preserves the voxel space without height compression to alleviate semantic ambiguity and enable spatial interactions. Benefiting from the unified manner, cross-modality interaction is then proposed to make full use of inherent properties from different sensors, including knowledge transfer and modality fusion. In this way, geometry-aware expressions in point clouds and context-rich features in images are well utilized for better performance and robustness. The transformer decoder is applied to efficiently sample features from the unified space with learnable positions, which facilitates object-level interactions. In general, UVTR presents an early attempt to represent different modalities in a unified framework. It surpasses previous work in single- and multi-modality entries and achieves leading performance on the nuScenes test set with 69.7%, 55.1%, and 71.1% NDS for LiDAR, camera, and multi-modality inputs, respectively. Code is made available at https://github.com/dvlab-research/UVTR.
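To make the abstract's pipeline concrete, the following is a minimal sketch (not the authors' released code) of the described idea: LiDAR and camera features are kept in a shared voxel space without height compression, fused there, and then queried by a transformer decoder with learnable 3D reference positions. All module names, shapes, the simple concatenation-based fusion, and the trilinear sampling scheme are illustrative assumptions.

```python
# Hedged sketch of a unified voxel-space detector with a query-based decoder.
# NOT the official UVTR implementation; shapes and fusion are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedVoxelDetectorSketch(nn.Module):
    def __init__(self, channels=128, num_queries=300, num_classes=10):
        super().__init__()
        # Modality fusion: concatenate LiDAR and camera voxel features, then reduce.
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)
        # Learnable object queries and their normalized 3D reference positions.
        self.query_embed = nn.Embedding(num_queries, channels)
        self.ref_points = nn.Embedding(num_queries, 3)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=channels, nhead=8, batch_first=True
        )
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=3)
        self.cls_head = nn.Linear(channels, num_classes)
        self.box_head = nn.Linear(channels, 7)  # (x, y, z, w, l, h, yaw)

    def forward(self, voxel_lidar, voxel_camera):
        # voxel_*: (B, C, Z, Y, X) voxel features kept without height compression.
        fused = self.fuse(torch.cat([voxel_lidar, voxel_camera], dim=1))
        B = fused.shape[0]
        # Sample voxel features at the learnable reference positions (trilinear).
        ref = self.ref_points.weight.tanh()                         # (Q, 3) in [-1, 1]
        grid = ref.view(1, -1, 1, 1, 3).expand(B, -1, -1, -1, -1)   # (B, Q, 1, 1, 3)
        sampled = F.grid_sample(fused, grid, align_corners=False)   # (B, C, Q, 1, 1)
        sampled = sampled.flatten(2).transpose(1, 2)                # (B, Q, C)
        queries = self.query_embed.weight.unsqueeze(0).expand(B, -1, -1)
        # Object-level interaction: queries attend to the sampled voxel features.
        hs = self.decoder(tgt=queries + sampled, memory=sampled)
        return self.cls_head(hs), self.box_head(hs)


if __name__ == "__main__":
    B, C, Z, Y, X = 2, 128, 8, 64, 64
    model = UnifiedVoxelDetectorSketch(channels=C)
    cls_logits, boxes = model(torch.randn(B, C, Z, Y, X), torch.randn(B, C, Z, Y, X))
    print(cls_logits.shape, boxes.shape)  # (2, 300, 10) and (2, 300, 7)
```

The sketch only illustrates the two ideas emphasized in the abstract: a single voxel space shared by both modalities, and object queries that sample that space at learnable positions before interacting in a transformer decoder.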