Representations are crucial for a robot to learn effective navigation policies. Recent work has shown that mid-level perceptual abstractions, such as depth estimates or 2D semantic segmentation, lead to more effective policies when provided as observations in place of raw sensor data (e.g., RGB images). However, such policies must still learn latent three-dimensional scene properties from mid-level abstractions. In contrast, high-level, hierarchical representations such as 3D scene graphs explicitly provide a scene's geometry, topology, and semantics, making them compelling representations for navigation. In this work, we present a reinforcement learning framework that leverages high-level hierarchical representations to learn navigation policies. Towards this goal, we propose a graph neural network architecture and show how to embed a 3D scene graph into an agent-centric feature space, which enables the robot to learn policies for low-level action in an end-to-end manner. For each node in the scene graph, our method uses features that capture occupancy and semantic content, while explicitly retaining memory of the robot trajectory. We demonstrate the effectiveness of our method against commonly used visuomotor policies in a challenging object search task. These experiments and supporting ablation studies show that our method leads to more effective object search behaviors, exhibits improved long-term memory, and successfully leverages hierarchical information to guide its navigation objectives.
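To make the described pipeline concrete, the following is a minimal illustrative sketch (not the paper's actual architecture) of message passing over a scene graph whose nodes carry occupancy and semantic features plus a visited flag as trajectory memory, pooled into an agent-centric embedding. All names (`Node`, `message_passing`, `agent_centric_embedding`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical per-node features: [occupancy, semantic score, visited flag].
    # The visited flag acts as explicit memory of the robot trajectory.
    feats: list
    neighbors: list = field(default_factory=list)

def message_passing(nodes, rounds=2):
    """Simple one-hop mean aggregation over the scene graph (illustrative only)."""
    h = [n.feats[:] for n in nodes]
    for _ in range(rounds):
        new_h = []
        for i, n in enumerate(nodes):
            msgs = [h[j] for j in n.neighbors] + [h[i]]  # neighbors + self
            new_h.append([sum(col) / len(msgs) for col in zip(*msgs)])
        h = new_h
    return h

def agent_centric_embedding(nodes, agent_idx, rounds=2):
    """Concatenate the embedding of the agent's node with a global mean pool,
    yielding a fixed-size, agent-centric feature vector for the policy."""
    h = message_passing(nodes, rounds)
    anchor = h[agent_idx]                                 # local context
    pooled = [sum(col) / len(h) for col in zip(*h)]       # global summary
    return anchor + pooled
```

In a real system each step would be a learned layer (e.g. attention-weighted aggregation) and the embedding would feed an RL policy head; the sketch only shows the data flow from graph to fixed-size observation.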