Exploration of indoor environments has recently attracted significant interest, thanks in part to the introduction of deep neural agents built in a hierarchical fashion and trained with Deep Reinforcement Learning (DRL) on simulated environments. Current state-of-the-art methods employ a dense extrinsic reward that requires complete a priori knowledge of the layout of the training environment to learn an effective exploration policy. However, such information is expensive to gather in terms of time and resources. In this work, we propose to train the model with a purely intrinsic reward signal that guides exploration and is based on the impact of the robot's actions on the environment. So far, impact-based rewards have been employed only for simple tasks and in procedurally generated synthetic environments with countable states. Since the number of states observable by the agent in realistic indoor environments is not countable, we employ a neural density model and replace the traditional count-based regularization with an estimated pseudo-count of previously visited states. The proposed exploration approach outperforms DRL-based competitors relying on intrinsic rewards and surpasses agents trained with a dense extrinsic reward computed from the environment layouts. We also show that a robot equipped with the proposed approach seamlessly adapts to point-goal navigation and real-world deployment.
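To make the reward structure described above concrete, the following is a minimal sketch, not the paper's implementation, of an impact-based intrinsic reward normalized by a pseudo-count. It assumes the agent has some learned feature embedding of its observations (`feat_t`, `feat_tp1` are hypothetical names) and that the pseudo-count is derived from a density model's probabilities before and after observing a state, following the standard prediction-gain formulation; the exact normalization used in the paper may differ.

```python
import numpy as np


def pseudo_count(rho_before: float, rho_after: float) -> float:
    """Pseudo-count estimate from a density model.

    rho_before: probability the density model assigns to the state before
                being updated on it.
    rho_after:  "recoding" probability assigned after one update on that state.
    """
    prediction_gain = max(rho_after - rho_before, 1e-12)  # avoid division by zero
    return rho_before * (1.0 - rho_after) / prediction_gain


def impact_reward(feat_t: np.ndarray, feat_tp1: np.ndarray,
                  rho_before: float, rho_after: float) -> float:
    """Intrinsic reward: how much the action changed the agent's observation
    in feature space, discounted for states estimated to have been visited often."""
    impact = np.linalg.norm(feat_tp1 - feat_t)    # effect of the action on the environment
    n_hat = pseudo_count(rho_before, rho_after)   # estimated visitation count
    return impact / np.sqrt(n_hat + 1.0)          # familiar states yield smaller rewards
```

Under these assumptions, a large change between consecutive observation embeddings produces a high reward, which decays as the density model becomes confident it has seen similar states before.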