Our work is based on the hypothesis that a model-free agent whose representations are predictive of properties of future states (beyond expected rewards) will be more capable of solving and adapting to new RL problems. To test that hypothesis, we introduce an objective based on Deep InfoMax (DIM), which trains the agent to predict the future by maximizing the mutual information between its internal representations of successive timesteps. We provide an intuitive analysis of the convergence properties of our approach from the perspective of Markov chain mixing times, and argue that convergence of the lower bound on mutual information is related to the inverse absolute spectral gap of the transition model. We test our approach in several synthetic settings, where it successfully learns representations that are predictive of the future. Finally, we augment C51, a strong RL baseline, with our temporal DIM objective and demonstrate improved performance on a continual learning task and on the recently introduced Procgen environment.
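To make the temporal objective concrete, the sketch below shows one common way to realize a mutual-information lower bound between encodings of successive timesteps: an InfoNCE-style contrastive loss with a bilinear critic, where other transitions in the batch serve as negatives. This is a minimal illustration under stated assumptions, not the paper's exact architecture; the class name `TemporalDIM`, the encoder widths, and the bilinear critic are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalDIM(nn.Module):
    """InfoNCE-style lower bound on I(z_t; z_{t+1}) between encodings
    of consecutive states (an illustrative sketch, not the paper's
    exact model)."""

    def __init__(self, state_dim: int, embed_dim: int = 64):
        super().__init__()
        # Hypothetical encoder; widths are arbitrary assumptions.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        # Bilinear critic scoring (z_t, z_{t+1}) pairs.
        self.W = nn.Parameter(torch.randn(embed_dim, embed_dim) * 0.01)

    def forward(self, s_t: torch.Tensor, s_tp1: torch.Tensor) -> torch.Tensor:
        z_t = self.encoder(s_t)      # (B, D)
        z_tp1 = self.encoder(s_tp1)  # (B, D)
        # scores[i, j] = z_t[i]^T W z_tp1[j]; diagonal entries are the
        # positive (temporally consecutive) pairs, off-diagonal entries
        # are negatives drawn from other transitions in the batch.
        scores = z_t @ self.W @ z_tp1.t()  # (B, B)
        labels = torch.arange(scores.size(0), device=scores.device)
        # Row-wise cross-entropy is the InfoNCE loss; minimizing it
        # maximizes log(B) minus the loss, a lower bound on the MI.
        return F.cross_entropy(scores, labels)
```

In a setup like the one described above, this loss would presumably be added as an auxiliary term to the agent's usual C51 loss, e.g. `total = td_loss + beta * dim_loss`, where the weighting `beta` is an assumed hyperparameter.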
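For reference, the spectral-gap quantity invoked above is standard in Markov chain theory. The definitions and bound below follow the classical textbook treatment (e.g., Levin and Peres) for a reversible, ergodic chain; they are background for the abstract's claim, not the paper's specific convergence result.

```latex
% For a reversible, ergodic transition matrix $P$ with stationary
% distribution $\pi$ and eigenvalues
% $1 = \lambda_1 > |\lambda_2| \ge \dots \ge |\lambda_n|$:
\begin{align*}
  \gamma^{*} &= 1 - \max_{i \ge 2} |\lambda_i|
    && \text{(absolute spectral gap)} \\
  t_{\mathrm{rel}} &= \frac{1}{\gamma^{*}}
    && \text{(relaxation time)} \\
  t_{\mathrm{mix}}(\varepsilon) &\le t_{\mathrm{rel}}
    \log\!\left(\frac{1}{\varepsilon\,\pi_{\min}}\right)
    && \text{(classical mixing-time bound)}
\end{align*}
```

A smaller gap $\gamma^{*}$ means a larger relaxation time $1/\gamma^{*}$, i.e., slower mixing, which is the sense in which the abstract ties convergence of the MI lower bound to the inverse absolute spectral gap of the transition model.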