Monocular 3D object detection is an important task for autonomous driving considering its advantage of low cost. It is much more challenging than conventional 2D cases due to its inherent ill-posed property, which is mainly reflected in the lack of depth information. Recent progress on 2D detection offers opportunities to better solve this problem. However, it is non-trivial to make a generally adapted 2D detector work in this 3D task. In this paper, we study this problem with a practice built on a fully convolutional single-stage detector and propose a general framework, FCOS3D. Specifically, we first transform the commonly defined 7-DoF 3D targets to the image domain and decouple them into 2D and 3D attributes. Then the objects are distributed to different feature levels in consideration of their 2D scales and assigned only according to the projected 3D center during training. Furthermore, the center-ness is redefined with a 2D Gaussian distribution based on the 3D center to fit the 3D target formulation. All of this makes the framework simple yet effective, getting rid of any 2D detection or 2D-3D correspondence priors. Our solution achieves 1st place among all the vision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020. Code and models are released at https://github.com/open-mmlab/mmdetection3d.
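The redefined center-ness described above can be sketched as follows. This is a minimal illustration, not the exact implementation from the released code: the function name `gaussian_centerness` and the falloff parameter `alpha` are assumptions, and the precise constant used by FCOS3D may differ. The idea is that a pixel's center-ness target decays with a 2D Gaussian as its offset from the projected 3D center grows, so pixels near the 3D center dominate the loss.

```python
import numpy as np


def gaussian_centerness(px, py, cx, cy, alpha=2.5):
    """Soft center-ness target from a 2D Gaussian around the
    projected 3D center (cx, cy), evaluated at pixel (px, py).

    `alpha` (hypothetical value) controls how fast the target
    decays away from the center; larger alpha -> sharper peak.
    Returns 1.0 exactly at the center, approaching 0 far away.
    """
    # Squared 2D distance between the pixel and the projected 3D center.
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    # Gaussian falloff: exp(-alpha * d^2).
    return float(np.exp(-alpha * d2))


# A pixel at the projected 3D center gets full center-ness,
# and the target decreases monotonically with distance.
c0 = gaussian_centerness(10.0, 10.0, 10.0, 10.0)
c1 = gaussian_centerness(10.5, 10.0, 10.0, 10.0)
c2 = gaussian_centerness(11.0, 10.0, 10.0, 10.0)
```

Using the projected 3D center rather than the 2D box center ties the supervision signal directly to the 3D localization target, which is why the framework can drop 2D detection priors entirely.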