Agricultural robots are emerging as powerful assistants across a wide range of agricultural tasks; nevertheless, they still rely heavily on manual operation or fixed rail systems for movement. The AgriVLN method and the A2A benchmark pioneered the extension of Vision-and-Language Navigation (VLN) to the agricultural domain, enabling robots to navigate to target positions by following natural language instructions. In practical agricultural scenarios, navigation instructions often recur, yet AgriVLN treats each instruction as an independent episode, overlooking the potential of past experiences to provide spatial context for subsequent ones. To bridge this gap, we propose Spatial Understanding Memory for Agricultural Vision-and-Language Navigation (SUM-AgriVLN), in which the SUM module performs spatial understanding and saves spatial memory through 3D reconstruction and representation. When evaluated on the A2A benchmark, SUM-AgriVLN improves Success Rate from 0.47 to 0.54 with only a slight increase in Navigation Error from 2.91 m to 2.93 m, demonstrating state-of-the-art performance in the agricultural domain. Code: https://github.com/AlexTraveling/SUM-AgriVLN.
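The abstract does not specify the SUM module's interface, so the following is only a minimal sketch of the cross-episode idea it describes: 3D points reconstructed during earlier episodes are stored and later queried for spatial context. The class name `SpatialUnderstandingMemory` and its `save`/`query` methods are hypothetical, not the authors' API.

```python
import numpy as np


class SpatialUnderstandingMemory:
    """Hypothetical sketch of a cross-episode spatial memory.

    Accumulates 3D points reconstructed during past navigation
    episodes so that later instructions over the same field can
    query them for nearby landmark context.
    """

    def __init__(self):
        # Each entry: (N x 3 point cloud, free-text landmark label)
        self.entries: list[tuple[np.ndarray, str]] = []

    def save(self, points_xyz: np.ndarray, label: str) -> None:
        """Store reconstructed 3D points under a landmark label."""
        self.entries.append((points_xyz, label))

    def query(self, position_xyz: np.ndarray, radius: float = 3.0) -> list[str]:
        """Return labels of remembered landmarks within `radius` meters."""
        near = []
        for points, label in self.entries:
            dists = np.linalg.norm(points - position_xyz, axis=1)
            if dists.min() <= radius:
                near.append(label)
        return near


# Usage: a later episode reuses memory written by an earlier one.
memory = SpatialUnderstandingMemory()
memory.save(np.array([[2.0, 0.0, 1.5]]), "end of tomato row")
print(memory.query(np.array([1.0, 0.0, 1.0])))  # ['end of tomato row']
```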