The ability to navigate like a human toward a language-guided target from anywhere in a 3D embodied environment is one of the 'holy grail' goals of intelligent robots. Most visual navigation benchmarks, however, focus on navigating toward a target from a fixed starting point, guided by an elaborate set of instructions that depicts the route step by step. This setting deviates from real-world problems, in which a human typically only describes what the target object and its surroundings look like and asks the robot to start navigating from anywhere. Accordingly, in this paper we introduce a Scenario Oriented Object Navigation (SOON) task, in which an agent is required to navigate from an arbitrary position in a 3D embodied environment to localize a target following a scene description. To provide a promising direction for solving this task, we propose a novel graph-based exploration (GBE) method, which models the navigation state as a graph, learns knowledge from that graph, and stabilizes training by learning from sub-optimal trajectories. We also propose a new large-scale benchmark named the From Anywhere to Object (FAO) dataset. To avoid target ambiguity, the descriptions in FAO provide rich semantic scene information, including object attributes, object relationships, region descriptions, and nearby region descriptions. Our experiments show that the proposed GBE outperforms various state-of-the-art methods on both the FAO and R2R datasets, and ablation studies on FAO validate the quality of the dataset.
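To make the graph-based navigation-state idea more concrete, the sketch below illustrates one plausible way an agent could accumulate visited viewpoints into a graph during exploration and replay paths over it as training trajectories. All names here (`NavGraph`, `add_observation`, `suboptimal_trajectory`) are hypothetical illustrations under our own assumptions, not the paper's actual GBE implementation.

```python
# Minimal sketch of a graph-based navigation state, assuming viewpoints are
# identified by string ids and edges denote navigability between them.
from collections import deque

class NavGraph:
    """Accumulates visited viewpoints and their connectivity during exploration."""
    def __init__(self):
        self.edges = {}  # viewpoint id -> set of neighboring viewpoint ids

    def add_observation(self, node, neighbors):
        # Record the current viewpoint and the navigable viewpoints seen from it.
        self.edges.setdefault(node, set()).update(neighbors)
        for n in neighbors:
            self.edges.setdefault(n, set()).add(node)

    def suboptimal_trajectory(self, start, goal):
        # BFS over the *explored* graph: shortest path w.r.t. current knowledge,
        # which may be sub-optimal in the full environment. Such replayed paths
        # are one way extra supervision could stabilize training.
        queue, parent = deque([start]), {start: None}
        while queue:
            node = queue.popleft()
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = parent[node]
                return path[::-1]
            for nxt in self.edges.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    queue.append(nxt)
        return None  # goal not yet reachable in the explored graph

# Usage: grow the graph as the agent moves, then replay a path for training.
g = NavGraph()
g.add_observation("v0", ["v1", "v2"])
g.add_observation("v1", ["v3"])
print(g.suboptimal_trajectory("v0", "v3"))  # ['v0', 'v1', 'v3']
```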