Indoor relocalization is vital both for robotic tasks like autonomous exploration and for civilian applications such as navigating with a cell phone in a shopping mall. Some previous approaches adopt geometric information such as key-point features or local textures to carry out indoor relocalization, but they either fail easily in environments with visually similar scenes or require many database images. Inspired by the fact that humans often remember places by recognizing unique landmarks, we resort to objects, which are more informative than geometric elements. In this work, we propose a simple yet effective object-based indoor relocalization approach, dubbed AirLoc. To overcome the critical challenges of object re-identification and remembering object relationships, we extract object-wise appearance embeddings and inter-object geometric relationships. The geometry and appearance features are integrated to generate cumulative scene features. This results in a robust, accurate, and portable indoor relocalization system, which outperforms the state-of-the-art methods in room-level relocalization by 9.5% in PR-AUC and 7% in accuracy. In addition to exhaustive evaluation, we also carry out real-world tests, where AirLoc demonstrates robustness to challenges such as severe occlusion, perceptual aliasing, viewpoint shift, and deformation.
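To make the idea of fusing object appearance with inter-object geometry concrete, the sketch below illustrates one minimal way such a pipeline could look. It is not the authors' implementation: the feature aggregation (mean of normalized appearance embeddings plus a histogram of pairwise object-centroid distances), the helper names `scene_descriptor` and `relocalize`, and the assumption that object centroids are given in normalized image coordinates are all illustrative choices.

```python
# Minimal sketch (assumptions, not the AirLoc implementation): combine per-object
# appearance embeddings with a crude inter-object geometric signature into a
# cumulative scene feature, then match a query image against known rooms.
import numpy as np


def scene_descriptor(appearance: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """appearance: (N, D) object appearance embeddings.
    centroids: (N, 2) object centroids in normalized [0, 1] image coordinates (assumed)."""
    # Appearance term: average of L2-normalized object embeddings.
    app = appearance / (np.linalg.norm(appearance, axis=1, keepdims=True) + 1e-8)
    app_feat = app.mean(axis=0)

    # Geometry term: histogram of pairwise centroid distances, a simple stand-in
    # for the learned inter-object geometric relationships described in the abstract.
    dists = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    upper = dists[np.triu_indices(len(centroids), k=1)]
    geo_feat, _ = np.histogram(upper, bins=8, range=(0.0, 1.5), density=True)

    feat = np.concatenate([app_feat, geo_feat])
    return feat / (np.linalg.norm(feat) + 1e-8)


def relocalize(query_feat: np.ndarray, room_feats: dict) -> str:
    """Return the room whose cumulative scene feature is most similar (cosine) to the query."""
    return max(room_feats, key=lambda room: float(query_feat @ room_feats[room]))
```

In this toy setup, room-level relocalization reduces to building one descriptor per database room and picking the most similar one for a query; the actual system would replace both the appearance and geometry terms with learned features.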