Detection of moving objects is a critical task in autonomous driving systems. After the perception phase, motion planning is typically performed in Bird's Eye View (BEV) space. This requires projecting objects detected on the image plane onto the top-view BEV plane. Such a projection is prone to errors due to the lack of depth information and noisy mapping in far-away areas. CNNs can leverage the global context of the scene to produce better projections. In this work, we explore end-to-end Moving Object Detection (MOD) on the BEV map directly, using monocular images as input. To the best of our knowledge, such a dataset does not exist, and we create an extended KITTI-raw dataset consisting of 12.9k images with annotations of moving object masks in BEV space for five classes. The dataset is intended for class-agnostic, motion-cue-based object detection, with class labels provided as metadata for better tuning. We design and implement a two-stream RGB and optical flow fusion architecture which outputs motion segmentation directly in BEV space. We compare it against inverse perspective mapping of state-of-the-art motion segmentation predictions made on the image plane. We observe a significant improvement of 13% in mIoU using our simple baseline implementation. This demonstrates the ability to learn motion segmentation output directly in BEV space. Qualitative results of our baseline and the dataset annotations can be found at https://sites.google.com/view/bev-modnet.
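To illustrate why the inverse perspective mapping (IPM) baseline degrades in far-away areas, the sketch below projects image pixels onto the BEV ground plane via a plane homography. It assumes a flat ground plane and hypothetical, KITTI-like camera parameters (focal length, principal point, mounting height, pitch); none of these values are taken from the paper.

```python
import numpy as np

# Hypothetical KITTI-like intrinsics (illustration only, not from the paper).
K = np.array([[721.5,   0.0, 609.5],   # fx,  0, cx
              [  0.0, 721.5, 172.8],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

# Extrinsics: world frame has Z up and the ground at Z = 0; the camera sits
# roughly 1.65 m above the ground, pitched slightly downward.
pitch = np.deg2rad(90.0 + 5.0)          # world Z-up -> camera Z-forward, 5 deg down
R = np.array([[1.0, 0.0,            0.0          ],
              [0.0, np.cos(pitch), -np.sin(pitch)],
              [0.0, np.sin(pitch),  np.cos(pitch)]])
t = np.array([[0.0], [1.65], [0.0]])

# For points on the ground plane (Z_world = 0), the full projection
# K [R | t] collapses to a 3x3 homography H: (X, Y, 1) -> (u, v, 1).
H = K @ np.hstack([R[:, :2], t])

def image_to_bev(uv):
    """Map a pixel (u, v) to ground-plane coordinates (X, Y) via H^-1."""
    p = np.linalg.inv(H) @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

# BEV distance stretches rapidly as pixels approach the horizon: here the
# 170-pixel row gap between v=300 and v=130 spans roughly 6 m to 59 m,
# so small pixel errors near the horizon become large BEV errors.
print(image_to_bev((609.5, 300.0)))   # ~6 m ahead of the camera
print(image_to_bev((609.5, 130.0)))   # ~59 m ahead of the camera
```

This flat-ground assumption is exactly what breaks for objects with height and at range, motivating a learned image-to-BEV mapping instead.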
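The abstract does not specify the fusion architecture in detail. The following is a rough, hypothetical sketch of the general idea only (two convolutional encoders for RGB and optical flow, concatenation fusion, and a decoder emitting a motion mask on a BEV grid); all layer counts, channel widths, and the resampling step are our own assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class TwoStreamBEVMotionSeg(nn.Module):
    """Minimal two-stream sketch (NOT the paper's architecture): an RGB
    encoder and an optical-flow encoder, fused by concatenation, decoded
    into a single-channel moving/static mask on a BEV grid."""
    def __init__(self, bev_size=(256, 256)):
        super().__init__()
        def encoder(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.rgb_enc = encoder(3)            # RGB stream
        self.flow_enc = encoder(2)           # optical-flow stream (u, v)
        self.fuse = nn.Conv2d(256, 128, 1)   # 1x1 conv over concatenated features
        self.bev_size = bev_size
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),             # logits; apply sigmoid for a mask
        )

    def forward(self, rgb, flow):
        f = torch.cat([self.rgb_enc(rgb), self.flow_enc(flow)], dim=1)
        f = self.fuse(f)
        # The image-to-BEV view change is learned implicitly end-to-end;
        # here we simply resample fused features to the BEV grid resolution.
        f = nn.functional.interpolate(f, size=self.bev_size,
                                      mode='bilinear', align_corners=False)
        return self.decoder(f)

model = TwoStreamBEVMotionSeg()
rgb = torch.randn(1, 3, 384, 1280)    # KITTI-like input resolution
flow = torch.randn(1, 2, 384, 1280)
print(model(rgb, flow).shape)          # torch.Size([1, 1, 256, 256])
```

The key property this sketch shares with the paper's approach is that supervision is applied directly on the BEV mask, so the network, rather than a fixed geometric mapping, learns the view transformation.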