The value function is a central notion in Reinforcement Learning (RL). Value estimation, especially with function approximation, can be challenging because it involves the stochasticity of environment dynamics and reward signals that may be sparse and delayed. A typical model-free RL algorithm estimates the value of a policy with Temporal Difference (TD) or Monte Carlo (MC) methods directly from rewards, without explicitly taking the dynamics into consideration. In this paper, we propose Value Decomposition with Future Prediction (VDFP), which provides an explicit two-step view of the value estimation process: 1) first foresee the latent future, and 2) then evaluate it. We analytically decompose the value function into a latent future dynamics part and a policy-independent trajectory return part, which induces a way to model latent dynamics and returns separately in value estimation. We further derive a practical deep RL algorithm, consisting of a convolutional model that learns compact trajectory representations from past experience, a conditional variational auto-encoder that predicts the latent future dynamics, and a convex return model that evaluates the trajectory representation. In experiments, we empirically demonstrate the effectiveness of our approach for both off-policy and on-policy RL on several OpenAI Gym continuous control tasks, as well as a few challenging variants with delayed rewards.
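To make the "foresee, then evaluate" pipeline concrete, the sketch below shows one possible arrangement of the three components named above: a 1-D convolutional trajectory encoder that produces a compact representation, a conditional VAE that predicts the latent future representation from the current state, and a return model that scores that representation. All module names, dimensions, losses, and wiring are illustrative assumptions rather than the authors' implementation, and the return model here is a plain MLP where the paper uses a convex return model.

```python
# Minimal PyTorch sketch of the VDFP components described in the abstract.
# Everything below (names, dimensions, architecture details) is assumed for
# illustration only; it is not the reference implementation.
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """1-D convolutional encoder: maps a trajectory segment of shape
    (batch, horizon, feat_dim) to a compact latent representation."""
    def __init__(self, feat_dim, latent_dim):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, latent_dim)

    def forward(self, traj):                       # traj: (B, H, F)
        h = self.conv(traj.transpose(1, 2))        # -> (B, 64, 1)
        return self.fc(h.squeeze(-1))              # -> (B, latent_dim)

class ConditionalVAE(nn.Module):
    """Conditional VAE: given the current state (the condition), predicts the
    latent representation of the future trajectory."""
    def __init__(self, cond_dim, latent_dim, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(cond_dim + latent_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(cond_dim + z_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim))

    def forward(self, cond, target_latent):
        # Standard reparameterized encoding of the target latent, conditioned on state.
        h = self.enc(torch.cat([cond, target_latent], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([cond, z], dim=-1)), mu, logvar

    def predict(self, cond):
        # At evaluation time, sample z from the prior and decode the latent future.
        z = torch.randn(cond.size(0), self.mu.out_features, device=cond.device)
        return self.dec(torch.cat([cond, z], dim=-1))

class ReturnModel(nn.Module):
    """Maps a (predicted) trajectory representation to a scalar return estimate.
    A plain MLP stand-in for the paper's convex return model."""
    def __init__(self, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, latent):
        return self.net(latent)

def estimate_value(cvae, return_model, cond):
    """Two-step value estimate: foresee the latent future, then evaluate it."""
    return return_model(cvae.predict(cond))
```

In this reading, the trajectory encoder and return model can be trained from logged trajectories and their returns independently of the policy, while only the conditional future-prediction module depends on the current policy's behavior.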