Monocular 3D object detection is of great significance for autonomous driving but remains challenging. The core challenge is to predict the distance of objects in the absence of explicit depth information. Unlike most existing methods, which regress the distance as a single variable, we propose a novel geometry-based distance decomposition that recovers the distance from its factors. The decomposition factors the distance of an object into the most representative and stable variables, i.e., its physical height and its projected visual height in the image plane. Moreover, the decomposition maintains self-consistency between the two heights, yielding robust distance predictions even when both predicted heights are inaccurate. The decomposition also enables us to trace the cause of distance uncertainty in different scenarios. Such a decomposition makes the distance prediction interpretable, accurate, and robust. Our method directly predicts 3D bounding boxes from RGB images with a compact architecture, making training and inference simple and efficient. Experimental results show that our method achieves state-of-the-art performance on the monocular 3D object detection and Bird's Eye View tasks of the KITTI dataset, and generalizes to images with different camera intrinsics.
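The geometric relation underlying such a height-based decomposition can be sketched with the standard pinhole camera model: for an object of physical height H whose projection spans h pixels in an image captured with focal length f, the distance along the optical axis is z = f * H / h. The function name and numeric values below are illustrative assumptions, not the paper's actual implementation:

```python
def estimate_distance(focal_length_px: float,
                      physical_height_m: float,
                      visual_height_px: float) -> float:
    """Recover object distance from the pinhole projection relation.

    Under the pinhole model, an object of physical height H (meters)
    at distance z projects to a visual height h (pixels) with
    h = f * H / z, so z = f * H / h.
    """
    if visual_height_px <= 0:
        raise ValueError("visual height must be positive")
    return focal_length_px * physical_height_m / visual_height_px


# Illustrative example: a 1.5 m tall car spanning 50 px under a
# 700 px focal length is estimated at 700 * 1.5 / 50 = 21.0 m.
print(estimate_distance(700.0, 1.5, 50.0))  # → 21.0
```

Because the predicted distance depends only on the ratio of the two heights, a consistent over- or under-estimate of both heights partially cancels out, which is one intuition behind the robustness claim in the abstract.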