Long-term autonomy for mobile robots requires both robust self-localization and reliable map maintenance. Conventional landmark-based methods face a fundamental trade-off between landmarks that are easy to detect but poorly distinctive (e.g., poles) and landmarks that are highly distinctive but hard to detect stably (e.g., local point cloud structures). This work addresses the challenge of descriptively identifying a unique "signature" (a local point cloud) by leveraging a detectable, high-precision "anchor" (such as a pole). To this end, we propose a novel canonical representation, "Pole-Image," a hybrid method that uses poles as anchors to generate signatures from the surrounding 3D structure. Pole-Image represents a pole-like landmark and its surrounding environment, detected from a LiDAR point cloud, as a 2D polar-coordinate image with the pole itself as the origin. This representation exploits the pole's nature as a high-precision reference point, explicitly encoding the relative geometry between the stable pole and the variable surrounding point cloud. The key advantage of pole landmarks is that detection is extremely easy, which allows the robot to reliably track the same pole and thereby collect diverse observational data (positive pairs) automatically and at large scale. This feasibility of data acquisition makes contrastive learning (CL) applicable: by applying CL, the model learns a viewpoint-invariant and highly discriminative descriptor. The contributions are twofold: 1) the descriptor overcomes perceptual aliasing, enabling robust self-localization; 2) the high-precision encoding enables high-sensitivity change detection, contributing to map maintenance.
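As a rough illustration of the Pole-Image idea (a minimal sketch, not the paper's exact formulation), the code below rasterizes the LiDAR points around a detected pole into a 2D polar image centered on the pole. The bin counts, the 10 m neighborhood radius, the max-height cell encoding, and all function and parameter names are assumptions introduced here for illustration only.

```python
import numpy as np

def pole_image(points, pole_xy, n_azimuth=64, n_radial=32, max_radius=10.0):
    """Render the LiDAR points around one detected pole as a 2D polar image.

    points     : (N, 3) array of LiDAR points (x, y, z)
    pole_xy    : (2,) estimated pole center, used as the polar origin
    n_azimuth  : number of angular bins (image columns)
    n_radial   : number of radial bins (image rows)
    max_radius : radius in meters of the neighborhood encoded around the pole

    Returns an (n_radial, n_azimuth) image; each cell stores the maximum
    point height falling into that (range, bearing) bin (0 where empty).
    """
    # Relative geometry: express every point with the pole as the origin.
    dx = points[:, 0] - pole_xy[0]
    dy = points[:, 1] - pole_xy[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)  # bearing in [-pi, pi)

    # Keep only the local neighborhood around the pole.
    keep = r < max_radius
    r, theta, z = r[keep], theta[keep], points[keep, 2]

    # Discretize (range, bearing) into image row/column indices.
    row = np.minimum((r / max_radius * n_radial).astype(int), n_radial - 1)
    col = np.minimum(((theta + np.pi) / (2 * np.pi) * n_azimuth).astype(int),
                     n_azimuth - 1)

    # Rasterize: keep the highest point per cell (one simple encoding choice).
    img = np.zeros((n_radial, n_azimuth), dtype=np.float32)
    np.maximum.at(img, (row, col), z)
    return img
```

In this sketch, two Pole-Images of the same tracked pole observed from different viewpoints would serve as a positive pair for a standard contrastive objective (e.g., an InfoNCE-style loss), which is how the viewpoint-invariant descriptor described in the abstract could be trained.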