The great progress of 3D object detectors relies on large-scale data and 3D annotations. Annotating 3D bounding boxes is extremely expensive, while 2D ones are easier and cheaper to collect. In this paper, we introduce a hybrid training framework that enables us to learn a visual 3D object detector with massive 2D (pseudo) labels, even without 3D annotations. To break through the information bottleneck of 2D clues, we explore a new perspective: Temporal 2D Supervision. We propose a temporal 2D transformation to bridge the 3D predictions with temporal 2D labels. Two steps, homography warping and 2D box deduction, are taken to transform the 3D predictions into 2D ones for supervision. Experiments conducted on the nuScenes dataset show strong results (nearly 90% of the fully-supervised performance) with only 25% of the 3D annotations. We hope our findings can provide new insights into using large numbers of 2D annotations for 3D perception.