Vision-based reinforcement learning (RL) has achieved tremendous success. However, generalizing a vision-based RL policy to unknown test environments remains a challenging problem. Unlike previous works that focus on training a universal RL policy that is invariant to discrepancies between test and training environments, we focus on developing an independent module that removes task-irrelevant interference factors, thereby providing "clean" observations for the RL policy. The proposed unsupervised visual attention and invariance method (VAI) contains three key components: 1) an unsupervised keypoint detection model which captures semantically meaningful keypoints in observations; 2) an unsupervised visual attention module which automatically generates a distraction-invariant attention mask for each observation; 3) a self-supervised adapter for visual distraction invariance which reconstructs the distraction-invariant attention mask from observations corrupted by artificial disturbances generated by a series of foreground and background augmentations. All components are optimized in an unsupervised way, without manual annotation or access to environment internals, and only the adapter is used at inference time to provide distraction-free observations to the RL policy. VAI empirically shows powerful generalization capabilities and significantly outperforms the current state-of-the-art (SOTA) method by 15% to 49% on the DeepMind Control suite benchmark and by 61% to 229% on our proposed robot manipulation benchmark, in terms of cumulative reward per episode.
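The following is a minimal PyTorch-style sketch of the pipeline described above, not the authors' implementation: the module names (`KeypointNet`, `AttentionNet`, `Adapter`), their interfaces, and the reconstruction loss are assumptions made for illustration. It shows the division of labor the abstract describes: keypoint detection and visual attention produce a distraction-invariant mask used only at training time, the adapter learns to reproduce the masked (clean) observation from an augmented one, and only the adapter is used at inference time in front of the RL policy.

```python
# Hypothetical sketch of the VAI training/inference split; names and
# interfaces are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Self-supervised adapter: predicts an attention mask from a (possibly
    distracted) observation and returns the masked, "clean" observation."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # single-channel mask logits
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.encoder(obs))  # [B, 1, H, W] in (0, 1)
        return mask * obs                        # distraction-suppressed observation


def training_step(adapter, keypoint_net, attention_net, obs, augment):
    """One self-supervised step (assumed interfaces): the frozen keypoint and
    attention modules build a distraction-invariant target from the clean
    observation; the adapter is trained to reconstruct it from an augmented copy."""
    with torch.no_grad():
        keypoints = keypoint_net(obs)            # semantic keypoints (unsupervised)
        mask = attention_net(obs, keypoints)     # distraction-invariant attention mask
        target = mask * obs
    pred = adapter(augment(obs))                 # foreground/background augmentations
    return F.mse_loss(pred, target)


@torch.no_grad()
def act(policy, adapter, obs):
    """Inference: only the adapter runs in front of the RL policy,
    providing distraction-free observations."""
    return policy(adapter(obs))
```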