Reasoning about the future -- understanding how decisions made in the present affect future outcomes -- is one of the central challenges of reinforcement learning (RL), especially in highly stochastic or partially observable environments. While predicting the future directly is hard, in this work we introduce a method that allows an agent to "look into the future" without explicitly predicting it. Namely, we propose to let an agent, while training on past experience, observe what \emph{actually} happened in the future at that time, while enforcing an information bottleneck so that the agent does not rely too heavily on this privileged information. This gives the agent the opportunity to exploit rich and useful information about future trajectory dynamics in addition to the present. Our method, Policy Gradients Incorporating the Future (PGIF), is easy to implement and versatile, being applicable to virtually any policy gradient algorithm. We apply PGIF to a number of off-the-shelf RL algorithms and show that it achieves higher reward faster in a variety of online and offline RL domains, as well as in sparse-reward and partially observable environments.
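To make the mechanism concrete, the following is a minimal, hypothetical sketch of the idea rather than the paper's implementation: it assumes a PyTorch-style setup in which an encoder summarizes the observed future trajectory into a stochastic latent, a KL term to a state-conditioned prior serves as the information bottleneck, and the policy conditions on the current state together with that latent. All names here (FutureConditionedPolicy, future_enc, beta, and so on) are illustrative assumptions, not identifiers from the paper.

\begin{verbatim}
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class FutureConditionedPolicy(nn.Module):
    """Policy conditioned on the current state and a bottlenecked
    summary of the (privileged) observed future trajectory."""
    def __init__(self, state_dim, action_dim, future_dim,
                 latent_dim=8, beta=0.1):
        super().__init__()
        self.beta = beta  # weight of the information-bottleneck penalty
        # q(z | future): encoder of the observed future trajectory segment
        self.future_enc = nn.Sequential(
            nn.Linear(future_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim))
        # p(z | state): prior that remains available at test time
        self.prior = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim))
        # Policy head conditioned on state and latent
        self.pi = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim))

    @staticmethod
    def _gaussian(params):
        mu, log_std = params.chunk(2, dim=-1)
        return Normal(mu, log_std.clamp(-5, 2).exp())

    def forward(self, state, future):
        q = self._gaussian(self.future_enc(future))  # uses privileged future
        p = self._gaussian(self.prior(state))        # usable at deployment
        z = q.rsample()                              # reparameterized sample
        logits = self.pi(torch.cat([state, z], dim=-1))
        # KL(q || p) is the bottleneck limiting reliance on the future;
        # a training loss would add self.beta * kl to the policy-gradient loss.
        kl = kl_divergence(q, p).sum(-1)
        return logits, kl
\end{verbatim}

At deployment time the future is unavailable, so in this sketch the latent would be drawn from the state-conditioned prior; the KL weight beta controls how much privileged future information the policy may exploit during training.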