Learned models of the environment provide reinforcement learning (RL) agents with flexible ways of making predictions about their environment. In particular, models enable planning, i.e., using more computation to improve value functions or policies without requiring additional environment interactions. In this work, we investigate a way of augmenting model-based RL by additionally encouraging a learned model and value function to be jointly \emph{self-consistent}. Our approach differs from classic planning methods such as Dyna, which only update values to be consistent with the model. We propose multiple self-consistency updates, evaluate them in both tabular and function approximation settings, and find that, with appropriate choices, self-consistency helps both policy evaluation and control.
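To illustrate the idea, the following is a minimal sketch of what a joint self-consistency objective could look like, under assumed notation (a learned reward model $\hat{r}_\phi$, a learned transition model $\hat{P}_\phi$, a policy $\pi$, and a value function $V_\theta$); it is not necessarily the exact objective used in this work:
\begin{equation*}
\mathcal{L}_{\mathrm{sc}}(\theta, \phi) \;=\; \mathbb{E}_{s \sim \mu}\Big[ \big( V_\theta(s) \,-\, \hat{r}_\phi(s, \pi(s)) \,-\, \gamma \, V_\theta\!\big(\hat{P}_\phi(s, \pi(s))\big) \big)^2 \Big],
\end{equation*}
where, in contrast to Dyna-style planning, the objective is minimized with respect to both the value parameters $\theta$ and the model parameters $\phi$, so that model and value function are pushed toward mutual consistency.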