Visual place recognition has received increasing attention in recent years as a key technology in autonomous driving and robotics. Current mainstream approaches follow either the perspective-to-perspective (P2P) paradigm, in which perspective query images are retrieved against perspective database images, or the equirectangular-to-equirectangular (E2E) paradigm, in which both queries and database images are panoramic. However, a natural and practical setting is that users have only consumer-grade pinhole cameras to capture perspective query images, which must then be retrieved against panoramic database images from map providers. To this end, we propose PanoVPR, a sliding-window-based perspective-to-equirectangular (P2E) visual place recognition framework. By sliding a window over the whole equirectangular image and computing and comparing feature descriptors between the query and each window, it eliminates the feature truncation caused by hard cropping. Moreover, this unified framework allows network architectures designed for P2P methods to be transferred directly without modification. To facilitate training and evaluation, we derive the pitts250k-P2E dataset from pitts250k and achieve promising results on it, and we also collect a real-world P2E dataset with a mobile robot platform, which we refer to as YQ360. Code and datasets will be made available at https://github.com/zafirshi/PanoVPR.
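To make the sliding-window idea concrete, the following is a minimal sketch of how a perspective query can be scored against a full equirectangular database image. Everything here is illustrative: the function names (extract_descriptor, p2e_similarity), the toy mean-pooled descriptor, and the window count are assumptions for exposition, not the actual PanoVPR implementation, which uses learned backbones transferred from P2P methods.

```python
# Sketch of sliding-window P2E matching (illustrative only).
import numpy as np

def extract_descriptor(image: np.ndarray) -> np.ndarray:
    """Stand-in for any P2P global-descriptor backbone (CNN/Transformer).
    Here: an L2-normalized per-channel mean, purely for illustration."""
    d = image.mean(axis=(0, 1))                 # (C,) toy global descriptor
    return d / (np.linalg.norm(d) + 1e-8)

def p2e_similarity(query_persp: np.ndarray,
                   equirect: np.ndarray,
                   num_windows: int = 8) -> float:
    """Slide a query-width window over the whole equirectangular image,
    wrapping around the horizontal seam so no window is hard-cropped,
    and return the best window-to-query descriptor similarity."""
    _, w_pano, _ = equirect.shape
    w_q = query_persp.shape[1]
    q_desc = extract_descriptor(query_persp)
    # Wrap horizontally: windows crossing the 360° seam stay intact.
    pano_wrapped = np.concatenate([equirect, equirect[:, :w_q]], axis=1)
    stride = w_pano // num_windows
    sims = [float(q_desc @ extract_descriptor(pano_wrapped[:, s:s + w_q]))
            for s in range(0, w_pano, stride)]
    return max(sims)  # panorama score = its best-matching window
```

At retrieval time, each database panorama would be scored by its best-matching window as above, and the top-ranked panoramas returned as place hypotheses for the query.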