Robust localization in a given map is a crucial component of most autonomous robots. In this paper, we address the problem of localizing in an indoor environment that changes over time, where prominent structures have no correspondence in a map built at an earlier point in time. To overcome the discrepancy between the map and the observed environment caused by such changes, we exploit human-readable localization cues to assist the localization. These cues are readily available in most facilities and can be detected in RGB camera images using text spotting. We integrate these cues into a Monte Carlo localization framework using a particle filter that operates on 2D LiDAR scans and camera data. In this way, we provide a localization solution that is robust to structural changes and to dynamics caused by walking humans. We evaluate our localization framework on multiple challenging indoor scenarios in an office environment. The experiments suggest that our approach is robust to structural changes and can run on an onboard computer. We will release an open-source implementation of our approach upon paper acceptance; it uses off-the-shelf text spotting and is written in C++ with a ROS wrapper.
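To illustrate the idea of integrating text cues into Monte Carlo localization, the following minimal C++ sketch shows how a single spotted text cue could reweight particles in a particle filter. It is not the authors' implementation: the map of sign positions (`TextMap`), the Gaussian range likelihood, and all names and parameters are assumptions made only for illustration.

```cpp
// Minimal sketch (not the paper's implementation): reweighting MCL particles
// with a human-readable text cue, assuming a hypothetical map that stores the
// known position of each sign keyed by its text.
#include <cmath>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Pose2D { double x, y, theta; };

struct Particle {
  Pose2D pose;
  double weight;
};

// Hypothetical map: text on a sign -> its known 2D position in the map frame.
using TextMap = std::map<std::string, std::pair<double, double>>;

// Reweight particles given one spotted text cue and the range at which it was
// observed, using a simple Gaussian range likelihood (assumed sensor model).
void reweightWithTextCue(std::vector<Particle>& particles,
                         const TextMap& text_map,
                         const std::string& spotted_text,
                         double observed_range,
                         double range_sigma = 0.5) {
  auto it = text_map.find(spotted_text);
  if (it == text_map.end()) return;  // cue not in the map, skip

  const auto [sx, sy] = it->second;
  double weight_sum = 0.0;
  for (auto& p : particles) {
    const double expected_range = std::hypot(sx - p.pose.x, sy - p.pose.y);
    const double err = expected_range - observed_range;
    p.weight *= std::exp(-0.5 * err * err / (range_sigma * range_sigma));
    weight_sum += p.weight;
  }
  if (weight_sum <= 0.0) return;
  for (auto& p : particles) p.weight /= weight_sum;  // normalize weights
}

int main() {
  // Toy example: one particle near the true pose, one far away.
  std::vector<Particle> particles = {
      {{1.0, 2.0, 0.0}, 0.5},
      {{8.0, 9.0, 0.0}, 0.5}};
  TextMap text_map = {{"Room 2.101", {2.0, 3.0}}};

  // A text spotter reports the sign "Room 2.101" roughly 1.4 m away;
  // after reweighting, the nearby particle dominates.
  reweightWithTextCue(particles, text_map, "Room 2.101", 1.4);
  return 0;
}
```

In a full system, this reweighting step would be combined with the usual 2D LiDAR observation model and motion update of the particle filter; the sketch only covers the text-cue part.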