This work proposes a novel model-free Reinforcement Learning (RL) agent that learns to complete an unknown task with access to only part of the input observation. We take inspiration from the concepts of visual attention and active perception that are characteristic of humans and apply them to our agent, creating a hard attention mechanism. In this mechanism, the model first decides which region of the input image to look at, and only then gains access to the pixels of that region. Current RL agents do not follow this principle, and we have not seen these mechanisms applied to the same purpose as in this work. In our architecture, we adapt an existing model, the recurrent attention model (RAM), and combine it with the proximal policy optimization (PPO) algorithm. We investigate whether a model with these characteristics can achieve performance similar to that of state-of-the-art model-free RL agents that access the full input observation. This analysis is carried out on two Atari games, Pong and SpaceInvaders, which have a discrete action space, and on CarRacing, which has a continuous action space. Besides assessing its performance, we also analyze the movement of our model's attention and compare it with an example of human behavior. Even with this visual limitation, we show that our model matches the performance of PPO+LSTM in two of the three games tested.
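The hard attention mechanism described above can be illustrated with a minimal sketch: the agent chooses a glimpse location first, and only the pixels inside that glimpse are revealed to it. The function name, glimpse size, and clamping strategy below are illustrative assumptions, not the paper's exact RAM implementation.

```python
import numpy as np

def extract_glimpse(image, center, size):
    """Crop a square glimpse around `center` (row, col).

    Under hard attention, the agent observes only these pixels,
    never the full frame. Illustrative sketch only.
    """
    h, w = image.shape[:2]
    half = size // 2
    r, c = center
    # Clamp the top-left corner so the glimpse stays inside the image.
    r0 = min(max(r - half, 0), h - size)
    c0 = min(max(c - half, 0), w - size)
    return image[r0:r0 + size, c0:c0 + size]

# Hypothetical example: an 84x84 preprocessed game frame.
frame = np.zeros((84, 84), dtype=np.uint8)
glimpse = extract_glimpse(frame, center=(40, 40), size=16)
print(glimpse.shape)  # (16, 16)
```

In the full architecture, a recurrent policy would emit the next `center` as an action, so the glimpse location itself is learned rather than fixed.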