Vision-language navigation is a task that requires an agent to follow instructions to navigate in an environment. It is becoming increasingly important in the field of embodied AI, with potential applications in autonomous navigation, search and rescue, and human-robot interaction. In this paper, we address a more practical yet challenging counterpart setting: vision-language navigation in continuous environments (VLN-CE). To develop a robust VLN-CE agent, we propose a new navigation framework, ETPNav, which focuses on two critical skills: 1) the capability to abstract environments and generate long-range navigation plans, and 2) the ability to perform obstacle-avoiding control in continuous environments. ETPNav performs online topological mapping of environments by self-organizing predicted waypoints along a traversed path, without prior environmental experience. This allows the agent to break down the navigation procedure into high-level planning and low-level control. Concurrently, ETPNav employs a transformer-based cross-modal planner to generate navigation plans based on topological maps and instructions. The plan is then executed by an obstacle-avoiding controller that leverages a trial-and-error heuristic to prevent navigation from getting stuck in obstacles. Experimental results demonstrate the effectiveness of the proposed method: ETPNav yields improvements of more than 10% and 20% over prior state-of-the-art on the R2R-CE and RxR-CE datasets, respectively. Our code is available at https://github.com/MarSaKi/ETPNav.
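For intuition, the controller's trial-and-error heuristic can be sketched as follows. This is a minimal illustrative sketch, not the released implementation; `sim.get_agent_position`, `sim.turn_towards`, `sim.step_forward`, and `sim.rotate` are hypothetical calls standing in for a Habitat-style low-level action interface, and the thresholds are placeholder values.

```python
import math

def _distance(a, b):
    """Euclidean distance between two position tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def tryout_control(sim, subgoal_pos, max_steps=40, rotate_deg=30.0, goal_radius=0.25):
    """Sketch of a trial-and-error obstacle-avoiding controller.

    Repeatedly steps toward a planned subgoal; if the agent's position stops
    changing after a forward step (i.e., the step was blocked by an obstacle),
    rotate by a fixed angle and try again. All simulator methods here are
    hypothetical stand-ins, not the actual ETPNav/Habitat API.
    """
    for _ in range(max_steps):
        pos = sim.get_agent_position()            # hypothetical: current (x, y, z)
        if _distance(pos, subgoal_pos) < goal_radius:
            return True                            # reached the subgoal
        sim.turn_towards(subgoal_pos)              # hypothetical: face the subgoal
        sim.step_forward()                         # hypothetical: one forward action
        new_pos = sim.get_agent_position()
        if _distance(new_pos, pos) < 1e-3:         # no progress: step was blocked
            sim.rotate(rotate_deg)                 # heuristic: turn away and retry
            sim.step_forward()
    return False                                   # budget exhausted without arrival
```

The key design choice this illustrates is that collisions are detected implicitly, by checking whether the agent actually moved, so no explicit obstacle map is needed at the control level.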