Effective monitoring of underwater ecosystems is crucial for tracking environmental changes, guiding conservation efforts, and ensuring long-term ecosystem health. However, automating underwater ecosystem management with robotic platforms remains challenging because the complexities of underwater imagery pose significant difficulties for traditional visual localization methods. We propose an integrated pipeline that combines Visual Place Recognition (VPR), feature matching, and image segmentation applied to video-derived imagery. This method enables robust identification of revisited areas, estimation of rigid transformations, and downstream analysis of ecosystem changes. Furthermore, we introduce the SQUIDLE+ VPR Benchmark, the first large-scale underwater VPR benchmark designed to leverage an extensive collection of unstructured data from multiple robotic platforms, spanning time intervals from days to years. The dataset encompasses diverse trajectories, arbitrary overlap, and varied seafloor types captured under differing environmental conditions, including differences in depth, lighting, and turbidity. Our code is available at: https://github.com/bev-gorry/underloc
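To make the pipeline concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of the transformation-estimation step: given a query frame and the reference frame retrieved by VPR, local features are matched and a 2D similarity transform (rotation, translation, uniform scale) is estimated robustly with RANSAC. The ORB detector, file paths, and thresholds are placeholder assumptions; learned features or matchers could be substituted.

```python
# Illustrative sketch only: estimate a robust 2D transform between a query
# frame and its top VPR-retrieved reference frame. Paths and parameters are
# placeholders, not values from the paper.
import cv2
import numpy as np


def estimate_transform(query_path: str, reference_path: str):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features (ORB chosen here for simplicity).
    orb = cv2.ORB_create(nfeatures=2000)
    kp_q, des_q = orb.detectAndCompute(query, None)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    if des_q is None or des_r is None:
        return None, None

    # Brute-force Hamming matching with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_r)
    if len(matches) < 4:
        return None, None

    src = np.float32([kp_q[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # RANSAC-based estimation of a 2x3 similarity transform; the inlier mask
    # indicates how reliably the two frames overlap.
    M, inliers = cv2.estimateAffinePartial2D(
        src, dst, method=cv2.RANSAC, ransacReprojThreshold=3.0
    )
    return M, inliers
```

In practice, the inlier count from this step can serve as a verification signal for the VPR retrieval before any downstream change analysis on the segmented imagery.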