State-of-the-art visual place recognition (VPR) performance is currently achieved by deep-learning-based approaches. Despite recent efforts in designing lightweight convolutional-neural-network-based models, these can still be too expensive for the most hardware-restricted robot applications. Low-overhead VPR techniques would not only enable platforms equipped with low-end, cheap hardware, but also reduce computation on more powerful systems, freeing those resources for other navigation tasks. In this work, our goal is to provide an algorithm of extreme compactness and efficiency while achieving state-of-the-art robustness to appearance changes and small point-of-view variations. Our first contribution is DrosoNet, an exceptionally compact model inspired by the odor-processing abilities of the fruit fly, Drosophila melanogaster. Our second and main contribution is a voting mechanism that leverages multiple small and efficient classifiers to achieve more robust and consistent VPR than any single one. We use DrosoNet as the baseline classifier for the voting mechanism and evaluate our models on five benchmark datasets, assessing moderate to extreme appearance changes and small to moderate viewpoint variations. We then compare the proposed algorithms against state-of-the-art methods, both in terms of precision-recall AUC and computational efficiency.
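The voting idea can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a plausible plurality-voting scheme in which each of several small classifiers (standing in for trained DrosoNets) scores every reference place for a query image and casts one vote for its top-scoring place, with the most-voted place returned as the match. The random scores are placeholders for real per-classifier outputs.

```python
# Hedged sketch of multi-classifier plurality voting for VPR.
# Assumption: each tiny classifier outputs a score per reference place;
# the real system would run trained DrosoNets on the query image.
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_places = 8, 100

# Placeholder score matrix: one row of per-place scores per classifier.
scores = rng.random((n_classifiers, n_places))

# Each classifier votes for its highest-scoring reference place.
votes = np.bincount(scores.argmax(axis=1), minlength=n_places)

# The place with the most votes is the predicted match.
predicted_place = int(votes.argmax())
```

Because each classifier is cheap, the ensemble remains inexpensive while the vote smooths out the errors any single compact model makes under appearance change.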