Long-term metric self-localization is an essential capability of autonomous mobile robots, but remains challenging for vision-based systems due to appearance changes caused by lighting, weather, or seasonal variations. While experience-based mapping has proven to be an effective technique for bridging the `appearance gap,' the number of experiences required for reliable metric localization over days or months can be very large, and methods for reducing the necessary number of experiences are needed for this approach to scale. Taking inspiration from color constancy theory, we learn a nonlinear RGB-to-grayscale mapping that explicitly maximizes the number of inlier feature matches for images captured under different lighting and weather conditions, and use it as a pre-processing step in a conventional single-experience localization pipeline to improve its robustness to appearance change. We train this mapping by approximating the target non-differentiable localization pipeline with a deep neural network, and find that incorporating a learned low-dimensional context feature can further improve cross-appearance feature matching. Using synthetic and real-world datasets, we demonstrate substantial improvements in localization performance across day-night cycles, enabling continuous metric localization over a 30-hour period using a single mapping experience, and allowing experience-based localization to scale to long deployments with dramatically reduced data requirements.
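The core idea above — learn a nonlinear RGB-to-grayscale mapping whose parameters maximize a *differentiable surrogate* of the non-differentiable localization pipeline's inlier count — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the mapping `gray`, the surrogate `surrogate_inliers` (a stand-in quadratic; the paper uses a deep network trained to mimic the real pipeline), and the finite-difference optimizer are all hypothetical simplifications.

```python
import numpy as np

def gray(rgb, theta):
    """Nonlinear RGB->grayscale (assumed form): softmax-weighted
    channel mix followed by a learned gamma exponent."""
    w = np.exp(theta[:3]) / np.exp(theta[:3]).sum()   # positive channel weights summing to 1
    gamma = np.exp(theta[3])                          # positive gamma
    return np.clip(rgb @ w, 1e-6, 1.0) ** gamma

def surrogate_inliers(theta):
    """Stand-in for the learned differentiable proxy of the pipeline's
    inlier-match count (assumption: smooth and peaked at some theta*).
    In the paper this role is played by a trained deep network."""
    theta_star = np.array([0.2, 1.0, -0.5, 0.1])
    return 100.0 - np.sum((theta - theta_star) ** 2)

def train(theta, steps=200, lr=0.1, eps=1e-4):
    """Maximize the surrogate objective; finite differences stand in
    for backpropagation through the proxy network."""
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            d = np.zeros_like(theta)
            d[i] = eps
            grad[i] = (surrogate_inliers(theta + d)
                       - surrogate_inliers(theta - d)) / (2 * eps)
        theta = theta + lr * grad                     # gradient ascent
    return theta

theta = train(np.zeros(4))
img = np.random.rand(8, 8, 3)   # toy RGB image with values in [0, 1]
out = gray(img, theta)          # grayscale image, same spatial shape
```

Once trained, the mapping is a cheap per-pixel pre-processing step applied to both map and live images before the conventional single-experience localization pipeline runs.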