Reinforcement learning aims to learn optimal policies from interaction with environments whose dynamics are unknown. Many methods rely on approximating a value function to derive near-optimal policies. In partially observable environments, these value functions depend on the complete sequence of past observations and actions, called the history. In this work, we show empirically that recurrent neural networks trained to approximate such value functions internally filter the posterior probability distribution of the current state given the history, called the belief. More precisely, we show that, as a recurrent neural network learns the Q-function, its hidden states become increasingly correlated with the beliefs of state variables that are relevant to optimal control. This correlation is measured through their mutual information. In addition, we show that the expected return of an agent increases with the ability of its recurrent architecture to reach a high mutual information between its hidden states and the beliefs. Finally, we show that the mutual information between the hidden states and the beliefs of variables that are irrelevant for optimal control decreases through the learning process. In summary, this work shows that, in its hidden states, a recurrent neural network approximating the Q-function of a partially observable environment reproduces a sufficient statistic of the history that is correlated with the part of the belief relevant for taking optimal actions.
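For concreteness, the quantities referred to above can be written in standard POMDP notation. The following is a minimal sketch; the symbols $h_t$, $b_t$, $z_t$, $T$, $O$, $r_t$ and $\gamma$ are chosen here for illustration and are not necessarily those used in the paper:

```latex
% Illustrative POMDP notation (assumed for this sketch, not taken from the paper):
% h_t : history, b_t : belief, T : transition model, O : observation model,
% r_t : reward at time t, \gamma : discount factor.
\begin{align*}
  h_t &= (o_0, a_0, o_1, \ldots, a_{t-1}, o_t)
    && \text{history of observations and actions} \\
  b_t(s) &= p(s_t = s \mid h_t)
    && \text{belief: posterior over the current state} \\
  b_{t+1}(s') &\propto O(o_{t+1} \mid s', a_t) \sum_{s} T(s' \mid s, a_t)\, b_t(s)
    && \text{recursive belief filtering} \\
  Q^{\pi}(h_t, a) &= \mathbb{E}\Bigl[\textstyle\sum_{k \ge 0} \gamma^{k} r_{t+k} \,\Big|\, h_t,\, a_t = a\Bigr]
    && \text{history-based Q-function under policy } \pi
\end{align*}
```

Under this notation, the correlation studied here is the mutual information $I(z_t; b_t)$ between the hidden state $z_t$ of the recurrent Q-network and the belief $b_t$ (or the marginal beliefs of individual state variables).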