LiDAR-produced point clouds are the major data source for most state-of-the-art 3D object detectors. Yet small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework that efficiently boosts 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense-point 3D detector (DDet) with dense point clouds as input, and design a sparse-point 3D detector (SDet) that takes regular (sparse) point clouds as input. Importantly, we formulate a lightweight plug-in S2D module and a point cloud reconstruction module in SDet to densify its 3D features, and train SDet to produce 3D features that follow the dense 3D features in DDet. Hence, at inference time, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its superior performance and efficiency over state-of-the-art methods.
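The core training idea described above can be sketched as feature-level distillation: the sparse-input detector (SDet) is optimized so that its latent 3D features match those of the dense-input detector (DDet). The minimal NumPy sketch below illustrates this objective only; the feature shapes, the plain MSE loss, and all names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """MSE between SDet (student) latent features and DDet (teacher)
    dense latent features; minimizing it pushes the sparse-input
    features toward the dense-input ones. (Assumed loss form.)"""
    assert student_feat.shape == teacher_feat.shape
    return float(np.mean((student_feat - teacher_feat) ** 2))

rng = np.random.default_rng(0)
# Hypothetical BEV-style feature maps with shape (channels, H, W).
teacher = rng.standard_normal((64, 128, 128))           # from dense input (DDet)
student = teacher + 0.1 * rng.standard_normal(teacher.shape)  # from sparse input (SDet), before convergence

loss = feature_distillation_loss(student, teacher)
print(loss)  # small but nonzero gap that training would drive toward 0
```

At inference, only the student branch would run, so the dense point clouds are needed during training alone, which is what lets SDet "simulate" dense features from regular inputs.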