Learned pointcloud representations do not generalize well as distance to the sensor increases. For example, at ranges beyond 60 meters, lidar pointclouds become so sparse that even humans cannot discern object shapes from each other. Yet this distance is not very far for fast-moving vehicles: a vehicle moving at 70 mph traverses 60 meters in under two seconds. For safe and robust driving automation, accurate 3D object detection at these ranges is indispensable. Against this backdrop, we introduce faraway-frustum: a novel fusion strategy for detecting faraway objects. The core idea is to rely solely on 2D vision for recognizing object class, since object shape does not change drastically with increasing depth, and to use pointcloud data to localize faraway objects in 3D space. For closer objects, we instead use learned pointcloud representations, following the state of the art. This strategy alleviates the main shortcoming of object detection with learned pointcloud representations. Experiments on the KITTI dataset demonstrate that our method outperforms the state of the art by a considerable margin for faraway object detection in bird's-eye view and in 3D. Our code is open-source and publicly available: https://github.com/dongfang-steven-yang/faraway-frustum.
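The faraway branch of this strategy can be sketched as follows: a 2D detection box defines a viewing frustum, the lidar points falling inside that frustum are selected, and their centroid gives a coarse 3D location. This is a minimal illustrative sketch, not the paper's exact pipeline; the function name, the assumption that points are already in camera coordinates, and the use of a simple centroid (rather than any clustering the full method may apply) are all assumptions made here for clarity.

```python
import numpy as np

def frustum_centroid(points_cam, box_2d, P):
    """Coarse 3D localization of a faraway object from its 2D box (sketch).

    points_cam: (N, 3) lidar points, assumed already transformed into the
                camera coordinate frame (illustrative assumption).
    box_2d:     (x1, y1, x2, y2) detection box in pixel coordinates.
    P:          (3, 4) camera projection matrix (KITTI-style).
    Returns the centroid of in-frustum points, or None if the frustum is empty.
    """
    # Project points to the image plane (homogeneous coordinates).
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    proj = pts_h @ P.T
    in_front = proj[:, 2] > 0                     # keep points in front of camera
    uv = proj[:, :2] / proj[:, 2:3]               # perspective divide

    # A point lies in the frustum iff its projection falls inside the 2D box.
    x1, y1, x2, y2 = box_2d
    mask = (in_front
            & (uv[:, 0] >= x1) & (uv[:, 0] <= x2)
            & (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    if not mask.any():
        return None
    # Centroid of the sparse in-frustum points as the object's 3D position.
    return points_cam[mask].mean(axis=0)
```

The object class comes entirely from the 2D detector; the frustum points are used only for localization, which is why this branch remains usable even where the pointcloud is too sparse for a learned 3D detector.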