Accurate localization is fundamental to autonomous driving. Traditional visual localization frameworks approach the semantic map-matching problem with geometric models, which rely on complex parameter tuning and thus hinder large-scale deployment. In this paper, we propose BEV-Locator: an end-to-end visual semantic localization neural network using multi-view camera images. Specifically, a visual BEV (Bird's-Eye-View) encoder extracts and flattens the multi-view images into the BEV space, while the semantic map features are structurally embedded as a sequence of map queries. A cross-modal transformer then associates the BEV features with the semantic map queries, and the localization information of the ego vehicle is recursively queried out by cross-attention modules. Finally, the ego pose is inferred by decoding the transformer outputs. We evaluate the proposed method on the large-scale nuScenes and Qcraft datasets. The experimental results show that BEV-Locator is capable of estimating the vehicle pose under versatile scenarios, effectively associating cross-modal information from multi-view images and global semantic maps. The experiments report satisfactory accuracy, with mean absolute errors of 0.052 m, 0.135 m, and 0.251$^\circ$ in lateral translation, longitudinal translation, and heading angle, respectively.
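To make the described pipeline concrete, the following is a minimal PyTorch sketch of the cross-modal stage: flattened BEV features serve as the transformer memory, the embedded semantic map queries (plus a learnable ego-pose query) attend to them via cross-attention, and the ego query's output is decoded into a pose offset. All module names, dimensions, and the three-DoF output parameterization here are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the BEV-Locator cross-attention pipeline.
# Module names, dimensions, and output parameterization are assumed.
import torch
import torch.nn as nn

class BEVLocatorSketch(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8, n_layers: int = 6):
        super().__init__()
        # Cross-modal transformer: queries attend to flattened BEV features.
        layer = nn.TransformerDecoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.transformer = nn.TransformerDecoder(layer, num_layers=n_layers)
        # Learnable ego-pose query appended to the semantic map queries.
        self.ego_query = nn.Parameter(torch.randn(1, 1, d_model))
        # Decode the ego query slot into (lateral, longitudinal, heading) offsets.
        self.pose_head = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, 3),
        )

    def forward(self, bev_feats: torch.Tensor, map_queries: torch.Tensor) -> torch.Tensor:
        # bev_feats:   (B, H*W, d_model) flattened BEV features from multi-view images
        # map_queries: (B, N, d_model) structurally embedded semantic map elements
        b = bev_feats.size(0)
        queries = torch.cat([map_queries, self.ego_query.expand(b, -1, -1)], dim=1)
        out = self.transformer(tgt=queries, memory=bev_feats)
        return self.pose_head(out[:, -1])  # read pose off the ego query slot

# Toy usage with random tensors standing in for the encoder and map embeddings.
model = BEVLocatorSketch()
pose = model(torch.randn(2, 50 * 50, 256), torch.randn(2, 32, 256))
print(pose.shape)  # torch.Size([2, 3]) -> (dx, dy, dyaw) per sample
```

In this sketch the pose is read from a single dedicated query, which matches the abstract's description of localization information being queried out by cross-attention; the actual network may use a different query arrangement or decoding head.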