Exploration of indoor environments has recently attracted significant interest, also thanks to the introduction of deep neural agents built in a hierarchical fashion and trained with Deep Reinforcement Learning (DRL) in simulated environments. Current state-of-the-art methods employ a dense extrinsic reward that requires complete a priori knowledge of the layout of the training environment to learn an effective exploration policy. However, such information is expensive to gather in terms of time and resources. In this work, we propose to train the model with a purely intrinsic reward signal to guide exploration, based on the impact of the robot's actions on its internal representation of the environment. So far, impact-based rewards have been employed only for simple tasks and in procedurally generated synthetic environments with countable states. Since the number of states observable by the agent in realistic indoor environments is uncountable, we include a neural-based density model and replace the traditional count-based regularization with an estimated pseudo-count of previously visited states. The proposed exploration approach outperforms DRL-based competitors relying on intrinsic rewards and surpasses agents trained with a dense extrinsic reward computed from the environment layouts. We also show that a robot equipped with the proposed approach seamlessly adapts to point-goal navigation and real-world deployment.
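To make the core idea concrete, the sketch below illustrates an impact-based intrinsic reward normalized by a pseudo-count estimated from a density model, in the spirit described above. It is a minimal sketch only: the class and function names (`ToyDensityModel`, `impact_pseudo_count_reward`), the diagonal-Gaussian density model, and the specific reward formula (impact divided by the square root of the pseudo-count) are illustrative assumptions, not the exact architecture or formulation used in this work; the pseudo-count is derived from the prediction-gain construction of Bellemare et al. (2016).

```python
import torch
import torch.nn as nn


class ToyDensityModel(nn.Module):
    """Placeholder density model over flat observation features.

    Fits a diagonal Gaussian by gradient descent. This is an illustrative
    stand-in for the neural density model; it is not the paper's model.
    """

    def __init__(self, dim, lr=1e-2):
        super().__init__()
        self.mean = nn.Parameter(torch.zeros(dim))
        self.log_std = nn.Parameter(torch.zeros(dim))
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def log_prob(self, obs):
        dist = torch.distributions.Normal(self.mean, self.log_std.exp())
        return dist.log_prob(obs).sum()

    def update(self, obs):
        # One gradient step of maximum-likelihood training on `obs`.
        self.opt.zero_grad()
        (-self.log_prob(obs)).backward()
        self.opt.step()


def impact_pseudo_count_reward(h_prev, h_curr, obs, density_model, eps=1e-6):
    """Impact reward attenuated by an estimated pseudo-count of the state."""
    # Impact: how much the agent's internal representation changed
    # after the last action.
    impact = torch.norm(h_curr - h_prev, p=2)

    # Pseudo-count via prediction gain: evaluate the density assigned to
    # `obs` before and after a single training step on it.
    with torch.no_grad():
        p = density_model.log_prob(obs).exp()
    density_model.update(obs)
    with torch.no_grad():
        p_prime = density_model.log_prob(obs).exp()

    gain = torch.clamp(p_prime - p, min=eps)
    pseudo_count = torch.clamp(p * (1.0 - p_prime) / gain, min=0.0)

    # Frequently visited states yield a large pseudo-count and a small bonus.
    return (impact / torch.sqrt(pseudo_count + 1.0)).item()


# Usage with dummy tensors: h_prev / h_curr are successive hidden states,
# obs is a flat observation feature vector.
dm = ToyDensityModel(dim=8)
h_prev, h_curr, obs = torch.randn(16), torch.randn(16), torch.randn(8)
print(impact_pseudo_count_reward(h_prev, h_curr, obs, dm))
```

The pseudo-count plays the role of the visitation count used in count-based exploration bonuses, but it remains well defined when the observation space is continuous and states cannot be enumerated.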