Object visual navigation aims to steer an agent towards a target object based on the agent's visual observations. It is essential that the agent both perceives the environment reasonably and is controlled accurately. For this navigation task, we introduce an Agent-Centric Relation Graph (ACRG) that learns a visual representation from the relationships in the environment. ACRG is a highly effective and reasonable structure consisting of two relationships, i.e., the relationship among objects and the relationship between the agent and the target. On the one hand, we design the Object Horizontal Relationship Graph (OHRG), which stores the relative horizontal locations among objects. Note that the vertical relationship is not involved in OHRG; we argue that the horizontal relationship alone is suitable for the control strategy. On the other hand, we propose the Agent-Target Depth Relationship Graph (ATDRG), which enables the agent to perceive its distance to the target; to achieve this, ATDRG utilizes image depth to represent the distance. Given the visual representation constructed by ACRG together with position-encoded global features, the agent can capture the target position and output navigation actions. Experimental results in the artificial environment AI2-Thor demonstrate that ACRG significantly outperforms other state-of-the-art methods in unseen testing environments.
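As an illustrative sketch only (not the paper's implementation; all function and variable names here are hypothetical), the two kinds of relationship described above could be computed from detected object boxes and a depth image roughly as follows:

```python
import numpy as np

def horizontal_relation(centers_x):
    """OHRG-style pairwise horizontal offsets among detected objects.

    centers_x: sequence of length N giving each object's horizontal
    bounding-box center, normalized to [0, 1].
    Returns an (N, N) matrix whose entry (i, j) is x_j - x_i,
    i.e. how far object j lies to the right of object i.
    """
    x = np.asarray(centers_x, dtype=float)
    return x[None, :] - x[:, None]

def agent_target_depth(depth_map, target_box):
    """ATDRG-style agent-to-target distance from an image depth map.

    depth_map: (H, W) array of per-pixel depth values.
    target_box: (x1, y1, x2, y2) pixel coordinates of the target.
    Returns the mean depth inside the target's bounding box as a
    crude proxy for the agent-target distance.
    """
    x1, y1, x2, y2 = target_box
    region = depth_map[y1:y2, x1:x2]
    return float(region.mean())

# Example: three objects at horizontal positions 0.2, 0.5, 0.9.
rel = horizontal_relation([0.2, 0.5, 0.9])
# rel[0, 2] is 0.7: the third object lies well to the right of the first.
dist = agent_target_depth(np.full((4, 4), 2.5), (1, 1, 3, 3))
```

In the actual method these relationships parameterize graph edges that are learned end-to-end, rather than being used as raw features.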