Humans can orient themselves in their 3D environments using simple 2D maps. In contrast, algorithms for visual localization mostly rely on complex 3D point clouds that are expensive to build, store, and maintain over time. We bridge this gap by introducing OrienterNet, the first deep neural network that can localize an image with sub-meter accuracy using the same 2D semantic maps that humans use. OrienterNet estimates the location and orientation of a query image by matching a neural Bird's-Eye View with open and globally available maps from OpenStreetMap, enabling anyone to localize anywhere such maps are available. OrienterNet is supervised only by camera poses but learns to perform semantic matching with a wide range of map elements in an end-to-end manner. To enable this, we introduce a large crowd-sourced dataset of images captured across 12 cities from the diverse viewpoints of cars, bikes, and pedestrians. OrienterNet generalizes to new datasets and pushes the state of the art in both robotics and AR scenarios. The code and trained model will be released publicly.
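To make the matching idea above concrete, below is a minimal, illustrative sketch of how a neural BEV could be exhaustively correlated against a rasterized map feature plane over 2D translations and a discretized set of yaw angles, producing a probability volume over (x, y, yaw). This is not the released implementation: the function name `match_bev_to_map`, the tensor shapes, and the 64-bin rotation grid are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's released code): exhaustively match a
# neural BEV template against a map feature plane over translations and a
# discretized set of yaw angles, yielding a probability volume over poses.
import math

import torch
import torch.nn.functional as F


def match_bev_to_map(bev: torch.Tensor,
                     map_feats: torch.Tensor,
                     num_rotations: int = 64) -> torch.Tensor:
    """Correlate a BEV template with a map feature plane.

    bev:       (C, h, w) neural Bird's-Eye View inferred from the query image.
    map_feats: (C, H, W) neural features of the rasterized 2D map tile.
    Returns a (num_rotations, H, W) probability volume over (yaw, x, y).
    """
    C, h, w = bev.shape
    scores = []
    for k in range(num_rotations):
        angle = 2.0 * math.pi * k / num_rotations
        cos, sin = math.cos(angle), math.sin(angle)
        # Rotate the BEV template to the k-th yaw hypothesis.
        theta = torch.tensor([[[cos, -sin, 0.0],
                               [sin,  cos, 0.0]]], dtype=bev.dtype)
        grid = F.affine_grid(theta, (1, C, h, w), align_corners=False)
        rotated = F.grid_sample(bev[None], grid, align_corners=False)
        # Cross-correlate the rotated template with the map features
        # (template used as a convolution kernel, stride 1, "same" padding).
        score = F.conv2d(map_feats[None], rotated, padding="same")  # (1, 1, H, W)
        scores.append(score[0, 0])
    volume = torch.stack(scores)  # (num_rotations, H, W)
    # Normalize the scores into a single distribution over all pose hypotheses.
    return torch.softmax(volume.flatten(), dim=0).view_as(volume)
```

A full system would likely also mask BEV cells outside the camera's field of view and implement the correlation more efficiently; the sketch only conveys the exhaustive 3-DoF (x, y, yaw) search that the abstract refers to.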