Loop Closure Detection (LCD) is an essential component of visual simultaneous localization and mapping (SLAM) systems. It enables the recognition of previously visited scenes to eliminate the pose and map estimation drift that accumulates during long-term exploration. However, current appearance-based LCD methods face significant challenges, including high computational cost, viewpoint variance, and dynamic objects in scenes. This paper introduces an online LCD approach based on Superpixel Grids (SGs), SGIDN-LCD, which measures similarity between scenes via hand-crafted features extracted from SGs. Unlike traditional Bag-of-Words (BoW) models that require pre-training, we propose an adaptive mechanism, the $\textbf{\textit{dynamic}}$ $\textbf{\textit{node}}$, that groups similar images and incrementally adjusts the database in an online manner, allowing efficient retrieval of previously viewed images. Experimental results demonstrate that SGIDN-LCD significantly improves LCD precision-recall performance and efficiency. Moreover, the proposed overall LCD method outperforms state-of-the-art approaches on multiple typical datasets.
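To make the dynamic-node idea concrete, the following is a minimal sketch of an online database that groups similar image descriptors into nodes and retrieves loop-closure candidates node by node. It assumes hand-crafted SG features can be treated as plain feature vectors; the cosine-similarity grouping rule, the threshold value, and the running-mean node update are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np


class DynamicNodeDatabase:
    """Illustrative online database: similar images are grouped into
    "dynamic nodes", and queries only inspect the most similar node(s).
    Grouping rule and threshold are assumptions, not the paper's method."""

    def __init__(self, group_threshold=0.85):
        self.group_threshold = group_threshold
        self.node_centroids = []   # one representative descriptor per node
        self.node_members = []     # image ids grouped under each node

    @staticmethod
    def _cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def add_image(self, image_id, descriptor):
        """Insert a new image online: join the most similar node if it is
        close enough, otherwise open a new node."""
        descriptor = np.asarray(descriptor, dtype=float)
        if self.node_centroids:
            sims = [self._cosine(descriptor, c) for c in self.node_centroids]
            best = int(np.argmax(sims))
            if sims[best] >= self.group_threshold:
                members = self.node_members[best]
                members.append(image_id)
                # incremental (running-mean) update of the node representative
                n = len(members)
                self.node_centroids[best] = (
                    self.node_centroids[best] * (n - 1) + descriptor) / n
                return best
        self.node_centroids.append(descriptor)
        self.node_members.append([image_id])
        return len(self.node_centroids) - 1

    def query(self, descriptor, top_nodes=1):
        """Return candidate image ids from the most similar node(s), so only
        a fraction of the database is compared per loop-closure query."""
        descriptor = np.asarray(descriptor, dtype=float)
        sims = [self._cosine(descriptor, c) for c in self.node_centroids]
        order = np.argsort(sims)[::-1][:top_nodes]
        candidates = []
        for idx in order:
            candidates.extend(self.node_members[idx])
        return candidates


if __name__ == "__main__":
    db = DynamicNodeDatabase()
    rng = np.random.default_rng(0)
    base = rng.random(64)
    for i in range(5):                 # five near-duplicate views of one scene
        db.add_image(i, base + 0.01 * rng.random(64))
    db.add_image(99, rng.random(64))   # an unrelated scene opens a new node
    print(db.query(base))              # -> ids grouped with the first scene
```

In this toy version, retrieval cost scales with the number of nodes rather than the number of stored images, which is the efficiency argument behind the dynamic-node grouping; the actual SGIDN-LCD similarity measure and node-update policy may differ.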