In this work, we present a unified framework for multi-modality 3D object detection, named UVTR. The proposed method aims to unify multi-modality representations in the voxel space for accurate and robust single- or cross-modality 3D detection. To this end, modality-specific spaces are first designed to represent the different inputs in the voxel feature space. Different from previous work, our approach preserves the voxel space without height compression to alleviate semantic ambiguity and enable spatial connections. To make full use of the inputs from different sensors, cross-modality interaction is then proposed, including knowledge transfer and modality fusion. In this way, geometry-aware expressions in point clouds and context-rich features in images are well utilized for better performance and robustness. A transformer decoder is applied to efficiently sample features from the unified space at learnable positions, which facilitates object-level interactions. In general, UVTR presents an early attempt to represent different modalities in a unified framework. It surpasses previous works on both single- and multi-modality entries, and achieves leading performance on the nuScenes test set for both object detection and the downstream object tracking task. Code is made publicly available at https://github.com/dvlab-research/UVTR.
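To make the overall pipeline concrete, the following is a minimal PyTorch-style sketch of the idea described above: image and point-cloud features are placed in a shared voxel space without height compression, fused, and then queried by a transformer decoder whose object queries carry learnable 3D positions. All module names, channel sizes, the additive fusion rule, and the query initialization are illustrative assumptions for this sketch, not the official UVTR implementation.

```python
# Sketch of unified voxel fusion with a query-based transformer decoder.
# Assumed shapes/choices (not from the paper): C=128 channels, a (32, 32, 4)
# voxel grid, 300 object queries, and simple additive modality fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnifiedVoxelFusion(nn.Module):
    def __init__(self, channels=128, num_queries=300):
        super().__init__()
        # Learnable 3D query positions in normalized [-1, 1] coordinates.
        self.query_pos = nn.Parameter(torch.rand(num_queries, 3) * 2 - 1)
        self.query_feat = nn.Embedding(num_queries, channels)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=channels, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=3)

    def sample_voxels(self, voxel_feat, positions):
        # voxel_feat: (B, C, X, Y, Z); positions: (N, 3) in [-1, 1].
        B = voxel_feat.shape[0]
        grid = positions.view(1, -1, 1, 1, 3).expand(B, -1, -1, -1, -1)
        sampled = F.grid_sample(voxel_feat, grid, align_corners=False)
        # (B, C, N, 1, 1) -> (B, N, C)
        return sampled.squeeze(-1).squeeze(-1).transpose(1, 2)

    def forward(self, img_voxel, pts_voxel):
        # Modality fusion in the unified voxel space (simple sum here).
        fused = img_voxel + pts_voxel                 # (B, C, X, Y, Z)
        B = fused.shape[0]
        memory = fused.flatten(2).transpose(1, 2)     # (B, X*Y*Z, C)
        # Initialize queries from features sampled at their learnable positions.
        queries = self.query_feat.weight.unsqueeze(0).expand(B, -1, -1) \
            + self.sample_voxels(fused, self.query_pos)
        return self.decoder(queries, memory)          # (B, num_queries, C)


if __name__ == "__main__":
    model = UnifiedVoxelFusion()
    img_voxel = torch.randn(2, 128, 32, 32, 4)   # image features lifted to voxels
    pts_voxel = torch.randn(2, 128, 32, 32, 4)   # voxelized point-cloud features
    print(model(img_voxel, pts_voxel).shape)     # torch.Size([2, 300, 128])
```

The sketch keeps the full 3D voxel grid as the decoder memory rather than collapsing the height dimension, mirroring the "no height compression" design choice; decoded query features would feed detection heads for box regression and classification in a full model.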