Cooperation of automated vehicles (AVs) can improve safety, efficiency, and comfort in traffic. Digital twins of Cooperative Intelligent Transport Systems (C-ITS) play an important role in monitoring, managing, and improving traffic. Computing a live digital twin of traffic requires live perception data as input, preferably from multiple connected entities such as AVs. One such type of perception data is the evidential occupancy grid map (OGM). Computing a digital twin involves the spatiotemporal alignment and fusion of these maps. In this work, we focus on the spatial alignment, also known as registration, and the fusion of evidential OGMs from multiple AVs. While there is extensive research on the synchronization and fusion of object-based environment representations, the registration and fusion of OGMs originating from multiple connected vehicles has received little attention. We propose a methodology that trains a deep neural network (DNN) to predict a fused evidential OGM from two OGMs computed by different AVs. The output includes an estimate of the first- and second-order uncertainty. We demonstrate that the DNN, trained on synthetic data only, outperforms a baseline approach based on coordinate transformation and combination rules, including on real-world data. Experimental results on synthetic data show that our approach can compensate for spatial misalignments of up to 5 meters and 20 degrees.
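The baseline mentioned above combines already-registered grids cell by cell with an evidential combination rule. A minimal sketch of such a per-cell fusion using Dempster's rule of combination is given below; the function name, the mass layout (masses on "occupied" and "free", remainder on "unknown"), and the example values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_evidential_ogm(m1_occ, m1_free, m2_occ, m2_free):
    """Per-cell Dempster's rule of combination for two evidential OGMs.

    Each grid assigns belief masses to 'occupied' and 'free'; the
    remainder 1 - m_occ - m_free is the mass on 'unknown' (the full
    frame of discernment). The grids are assumed to be registered,
    i.e. expressed in the same coordinate frame.
    """
    m1_unk = 1.0 - m1_occ - m1_free
    m2_unk = 1.0 - m2_occ - m2_free
    # Conflicting mass: one source says occupied while the other says free.
    conflict = m1_occ * m2_free + m1_free * m2_occ
    norm = 1.0 - conflict
    fused_occ = (m1_occ * m2_occ + m1_occ * m2_unk + m1_unk * m2_occ) / norm
    fused_free = (m1_free * m2_free + m1_free * m2_unk + m1_unk * m2_free) / norm
    return fused_occ, fused_free

# Two agreeing, moderately confident observations reinforce each other:
occ, free = fuse_evidential_ogm(np.array([0.6]), np.array([0.1]),
                                np.array([0.5]), np.array([0.2]))
# fused occupied mass (~0.76) exceeds either input's occupied mass
```

Note that this rule presupposes perfect spatial alignment; any residual registration error directly inflates the conflict term, which is one motivation for learning registration and fusion jointly.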
Joint Registration and Fusion of Evidential Occupancy Grid Maps for Live Digital Twins of Traffic