We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground truth and the prediction. This top-down approach outperforms its bottom-up counterpart, in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.
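To make the 3D-to-2D query mechanism concrete, the following is a minimal sketch of how per-query 3D reference points might be projected into each camera view and used to bilinearly sample 2D features. The tensor shapes, the function name sample_multiview_features, and the averaging over valid views are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def sample_multiview_features(ref_points, feats, cam_projs):
    """Project per-query 3D reference points into every camera view and
    bilinearly sample 2D image features there.

    ref_points: (Q, 3)       3D reference points decoded from object queries
    feats:      (N, C, H, W) 2D feature maps, one per camera
    cam_projs:  (N, 3, 4)    camera projection matrices (intrinsics @ extrinsics)
    returns:    (Q, C)       per-query features averaged over valid cameras
    """
    Q = ref_points.shape[0]
    N, C, H, W = feats.shape

    # Homogeneous coordinates: (Q, 4)
    pts_h = torch.cat([ref_points, ref_points.new_ones(Q, 1)], dim=-1)

    # Project into each camera: (N, Q, 3)
    proj = torch.einsum('nij,qj->nqi', cam_projs, pts_h)
    depth = proj[..., 2:3].clamp(min=1e-5)
    uv = proj[..., :2] / depth                       # pixel coords, (N, Q, 2)

    # Normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1)
    grid = grid * 2.0 - 1.0                          # (N, Q, 2)

    # Sample features at the projected locations: (N, C, Q)
    sampled = F.grid_sample(
        feats, grid.unsqueeze(2), align_corners=True
    ).squeeze(-1)

    # Mask out points behind the camera or outside the image
    valid = (proj[..., 2] > 0) & (grid.abs() <= 1).all(dim=-1)  # (N, Q)
    sampled = sampled * valid.unsqueeze(1)

    # Average over the cameras that actually see each point: (Q, C)
    count = valid.sum(dim=0).clamp(min=1)            # (Q,)
    return sampled.sum(dim=0).transpose(0, 1) / count.unsqueeze(-1)
```

The key design point this sketch illustrates is that no depth is ever predicted: the 3D reference point supplies the geometry, and the camera matrices deterministically link it to pixel locations in every view, so errors cannot compound through an intermediate depth map.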
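The set-to-set loss mentioned above is, in DETR-style detectors, typically realized via bipartite (Hungarian) matching between object queries and ground-truth boxes. Below is a hedged sketch under assumed inputs; the cost terms, their unit weights, and the 7-parameter box encoding are illustrative, and a full implementation would also supervise unmatched queries with a "no object" class.

```python
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def set_to_set_loss(pred_boxes, pred_logits, gt_boxes, gt_labels):
    """Match predictions to ground truth one-to-one, then compute the loss.

    pred_boxes:  (Q, 7)  predicted boxes (e.g. x, y, z, w, l, h, yaw)
    pred_logits: (Q, K)  classification logits over K classes
    gt_boxes:    (G, 7)  ground-truth boxes
    gt_labels:   (G,)    ground-truth class indices
    """
    probs = pred_logits.softmax(-1)                      # (Q, K)

    # Pairwise matching cost: negative class probability plus L1 box distance
    cost_cls = -probs[:, gt_labels]                      # (Q, G)
    cost_box = torch.cdist(pred_boxes, gt_boxes, p=1)    # (Q, G)
    cost = cost_cls + cost_box

    # Hungarian algorithm yields the optimal one-to-one assignment
    row, col = linear_sum_assignment(cost.detach().cpu().numpy())
    row, col = torch.as_tensor(row), torch.as_tensor(col)

    # Loss only over matched pairs
    loss_box = F.l1_loss(pred_boxes[row], gt_boxes[col])
    loss_cls = F.cross_entropy(pred_logits[row], gt_labels[col])
    return loss_cls + loss_box
```

Because each ground-truth box is matched to exactly one query, duplicate predictions are penalized during training rather than filtered afterward, which is why no non-maximum suppression is needed at inference time.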