Up-to-date High-Definition (HD) maps are essential for self-driving cars. To keep HD maps constantly updated, we present a deep neural network (DNN), Diff-Net, that detects changes in them. Compared to traditional methods based on object detectors, the essential design in our work is a parallel feature difference calculation structure that infers map changes by comparing features extracted from the camera image and the rasterized map image. To generate these rasterized images, we project map elements onto the image in the camera view, yielding meaningful map representations that a DNN can consume accordingly. Formulating the change detection task as an object detection problem, we leverage an anchor-based structure that predicts bounding boxes with different change-status categories. Furthermore, rather than relying on single-frame input, we introduce a spatio-temporal fusion module that fuses features from history frames into the current frame, thus improving the overall performance. Finally, we comprehensively validate our method's effectiveness using freshly collected datasets. Results demonstrate that our Diff-Net achieves better performance than the baseline methods and is ready to be integrated into a map production pipeline for maintaining up-to-date HD maps.
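To make the parallel feature-difference idea concrete, the following is a minimal, hypothetical PyTorch sketch: two convolutional branches extract features from the camera image and the rasterized map image, their element-wise difference feeds an anchor-style head that outputs change-status scores and box offsets. All module names, channel sizes, and the toy head are assumptions for illustration and do not reproduce the actual Diff-Net architecture.

```python
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Small downsampling convolutional block shared by both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class FeatureDiffDetector(nn.Module):
    """Illustrative sketch (not the paper's architecture): parallel branches
    encode the camera frame and the rasterized HD-map image; the feature
    difference drives an anchor-based head predicting per-anchor
    change-status classes and bounding-box regressions."""

    def __init__(self, num_anchors: int = 3, num_change_classes: int = 3):
        super().__init__()
        self.camera_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.map_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Heads operate on the element-wise feature difference.
        self.cls_head = nn.Conv2d(64, num_anchors * num_change_classes, kernel_size=1)
        self.box_head = nn.Conv2d(64, num_anchors * 4, kernel_size=1)

    def forward(self, camera_img: torch.Tensor, raster_map: torch.Tensor):
        diff = self.camera_branch(camera_img) - self.map_branch(raster_map)
        return self.cls_head(diff), self.box_head(diff)


if __name__ == "__main__":
    model = FeatureDiffDetector()
    cam = torch.randn(1, 3, 128, 128)   # camera frame
    ras = torch.randn(1, 3, 128, 128)   # map elements rasterized in the camera view
    cls_logits, box_deltas = model(cam, ras)
    print(cls_logits.shape, box_deltas.shape)  # per-anchor change scores and box offsets
```

In this sketch the spatio-temporal fusion over history frames is omitted; it would sit before the difference computation, aggregating per-frame features into the current frame's feature map.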