We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains. Data acquired under these challenging conditions suffer from textureless surfaces, image degradation, and the presence of repetitive and highly ambiguous structures. When naively deployed, state-of-the-art methods can fail in those scenarios, as confirmed by our empirical analysis. In this paper, we attempt to make camera relocalization work in these extreme situations. To this end, we propose (i) a hierarchical localization system that leverages temporal information and (ii) a novel environment-aware image enhancement method to boost robustness and accuracy. Our extensive experimental results demonstrate the superior performance of our method in two extreme settings: localizing an autonomous underwater vehicle and localizing a planetary rover in a Mars-like desert. In addition, our method achieves performance comparable to state-of-the-art methods on an indoor benchmark (the 7-Scenes dataset) while using only 20% of the training data.