Generalization in partially observable Markov decision processes (POMDPs) is critical for successful applications of visual reinforcement learning (VRL) in real scenarios. A widely used idea is to learn task-relevant representations that encode the task-relevant information shared across POMDPs, i.e., rewards and transition dynamics. Since the transition dynamics in the latent state space -- which are task-relevant and invariant to visual distractions -- are unknown to the agent, existing methods instead use the transition dynamics in the observation space to extract the task-relevant information they contain. However, the transition dynamics in the observation space involve task-irrelevant visual distractions, degrading the generalization performance of VRL methods. To tackle this problem, we propose the reward sequence distribution conditioned on the starting observation and the predefined subsequent action sequence (RSD-OA). The appealing features of RSD-OA are that: (1) RSD-OA is invariant to visual distractions, as it is conditioned on a predefined action sequence and thus free of the task-irrelevant information carried by the transition dynamics, and (2) the reward sequence captures long-term task-relevant information in both rewards and transition dynamics. Experiments demonstrate that our representation learning approach based on RSD-OA significantly improves generalization to unseen environments, outperforming several state-of-the-art methods on DeepMind Control tasks with visual distractions.
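To make the central object concrete, here is a minimal formalization in our own notation (the symbols $o_t$, $a_{t:t+H-1}$, $r_{t+1:t+H}$, $H$, and $\phi$ are ours, not necessarily the paper's): writing $o_t$ for the starting observation, $a_{t:t+H-1}$ for a predefined action sequence of length $H$, and $r_{t+1:t+H}$ for the rewards obtained by executing that sequence, RSD-OA is the conditional distribution
\[
p\big(r_{t+1:t+H} \,\big|\, o_t,\, a_{t:t+H-1}\big),
\]
and a representation $\phi(o_t)$ is learned such that this distribution can be predicted from $\phi(o_t)$ and the action sequence alone. Because the conditioning involves no future observations, the target carries no task-irrelevant visual information, while the reward sequence still reflects both rewards and the underlying (latent) transition dynamics.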