In the context of visual navigation, the capacity to map a novel environment is necessary for an agent to exploit its observation history in that environment and efficiently reach known goals. This ability can be associated with spatial reasoning, where an agent is able to perceive spatial relationships and regularities, and to discover object characteristics. Recent work introduces learnable policies parameterized by deep neural networks and trained with Reinforcement Learning (RL). In classical RL setups, the capacity to map and reason spatially is learned end-to-end, from reward alone. In this setting, we introduce supplementary supervision in the form of auxiliary tasks designed to favor the emergence of spatial perception capabilities in agents trained for a goal-reaching downstream objective. We show that learning to estimate metrics quantifying the spatial relationships between an agent at a given location and a goal to reach has a strong positive impact in Multi-Object Navigation settings. Our method significantly improves the performance of different baseline agents, which build either an explicit or an implicit representation of the environment, even matching the performance of incomparable oracle agents that take ground-truth maps as input. A learning-based agent from the literature trained with the proposed auxiliary losses was the winning entry to the Multi-Object Navigation Challenge, part of the CVPR 2021 Embodied AI Workshop.
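As a concrete but purely illustrative view of how such auxiliary supervision could be attached to a deep RL navigation agent, the Python sketch below adds regression heads for the distance and relative direction to the goal on top of the agent's hidden state and turns their prediction errors into an extra loss term. The class and parameter names (SpatialAuxiliaryHeads, hidden_dim, aux_weight) are assumptions for illustration and do not reflect the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAuxiliaryHeads(nn.Module):
    """Small prediction heads on top of the agent's hidden state that regress
    spatial quantities relating the agent to the current goal (illustrative)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.distance_head = nn.Linear(hidden_dim, 1)   # Euclidean distance to the goal
        self.direction_head = nn.Linear(hidden_dim, 2)  # (cos, sin) of the relative bearing

    def forward(self, hidden_state: torch.Tensor):
        return self.distance_head(hidden_state), self.direction_head(hidden_state)


def spatial_auxiliary_loss(heads, hidden_state, gt_distance, gt_bearing, aux_weight=0.1):
    """Supervised auxiliary loss, added to the usual RL objective during training.
    Ground-truth distance/bearing are assumed to be provided by the simulator."""
    pred_dist, pred_dir = heads(hidden_state)
    dist_loss = F.mse_loss(pred_dist.squeeze(-1), gt_distance)
    dir_target = torch.stack([torch.cos(gt_bearing), torch.sin(gt_bearing)], dim=-1)
    dir_loss = F.mse_loss(pred_dir, dir_target)
    return aux_weight * (dist_loss + dir_loss)


# Usage (conceptual): the auxiliary term is simply summed with the RL loss,
#   total_loss = rl_loss + spatial_auxiliary_loss(heads, hidden_state, gt_distance, gt_bearing)
```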