3D object detection from visual sensors is a cornerstone capability of robotic systems. State-of-the-art methods focus on reasoning and decoding object bounding boxes from multi-view camera input. In this work, we draw intuition from the integral role of multi-view consistency in 3D scene understanding and geometric learning. To this end, we introduce VEDet, a novel 3D object detection framework that exploits 3D multi-view geometry to improve localization through viewpoint awareness and equivariance. VEDet leverages a query-based transformer architecture and encodes the 3D scene by augmenting image features with positional encodings derived from their 3D perspective geometry. We design view-conditioned queries at the output level, which enable the generation of multiple virtual frames during training to learn viewpoint equivariance by enforcing multi-view consistency. The multi-view geometry injected at the input level as positional encodings and regularized at the loss level provides rich geometric cues for 3D object detection, leading to state-of-the-art performance on the nuScenes benchmark. The code and model are made available at https://github.com/TRI-ML/VEDet.
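To make the "positional encodings from 3D perspective geometry" idea concrete, below is a minimal, hypothetical sketch (not the released VEDet implementation) of one way to augment image features with geometry-aware encodings: per-pixel viewing rays are computed from each camera's intrinsics and pose, then mapped to the feature dimension by an MLP and added to the backbone features. The class name `GeometricPositionalEncoding` and the ray-based parameterization are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometricPositionalEncoding(nn.Module):
    """Illustrative sketch: adds positional encodings computed from
    per-pixel 3D viewing rays (an assumption, not the official VEDet code)."""

    def __init__(self, dim: int):
        super().__init__()
        # Small MLP that lifts a 3D ray direction to the feature dimension.
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats, intrinsics, cam_to_world):
        # feats: (B, C, H, W) backbone image features
        # intrinsics: (B, 3, 3) camera intrinsics; cam_to_world: (B, 4, 4) poses
        B, C, H, W = feats.shape
        ys, xs = torch.meshgrid(
            torch.arange(H, device=feats.device, dtype=feats.dtype),
            torch.arange(W, device=feats.device, dtype=feats.dtype),
            indexing="ij",
        )
        # Homogeneous pixel coordinates, back-projected to camera-frame rays.
        pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)          # (H, W, 3)
        rays = torch.einsum("bij,hwj->bhwi", intrinsics.inverse(), pix)   # (B, H, W, 3)
        # Rotate rays into the world frame and normalize to unit length.
        rays = torch.einsum("bij,bhwj->bhwi", cam_to_world[:, :3, :3], rays)
        rays = F.normalize(rays, dim=-1)
        # Encode rays and add them to the image features (assumes dim == C).
        pe = self.mlp(rays).permute(0, 3, 1, 2)                           # (B, dim, H, W)
        return feats + pe


if __name__ == "__main__":
    enc = GeometricPositionalEncoding(dim=256)
    feats = torch.randn(2, 256, 16, 44)         # e.g., 2 cameras of CNN features
    K = torch.eye(3).expand(2, 3, 3).clone()    # placeholder intrinsics
    T = torch.eye(4).expand(2, 4, 4).clone()    # placeholder camera poses
    print(enc(feats, K, T).shape)               # torch.Size([2, 256, 16, 44])
```

Under this reading, the same machinery naturally supports the virtual frames described above: because the encoding depends only on camera geometry, perturbing the pose matrix yields features for a virtual viewpoint whose predictions can be supervised for multi-view consistency.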