While off-policy temporal difference (TD) methods have been widely used in reinforcement learning due to their efficiency and simple implementation, their Bayesian counterparts have seen far less use. One reason is that the non-linear max operation in the Bellman optimality equation makes it difficult to define conjugate distributions over the value functions. In this paper, we introduce a novel Bayesian approach to off-policy TD methods, called ADFQ, which updates beliefs over the state-action values, Q, through an online Bayesian inference method known as Assumed Density Filtering. To obtain a closed-form update, we approximate the analytic parameters of the posterior over the Q-beliefs. The uncertainty measures of the beliefs are used not only for exploration but also as a natural regularizer during learning. We show that ADFQ converges to Q-learning as the uncertainty measures of the Q-beliefs decrease. ADFQ mitigates common drawbacks of other Bayesian RL algorithms, such as high computational complexity. We also extend ADFQ with a neural network. Our empirical results demonstrate that the proposed ADFQ algorithm outperforms comparable algorithms on various domains, including continuous-state domains and games from the Arcade Learning Environment.
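To make the idea concrete, below is a minimal, illustrative sketch of a Bayesian Q-update in the spirit described above, not the paper's exact ADFQ derivation. It assumes independent Gaussian beliefs over each Q-value, a simplified TD target built from the next-state belief with the largest mean, and an assumed observation-noise variance `obs_var`; the class name `GaussianQBeliefs` and all hyperparameters are hypothetical choices made for this sketch only.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact ADFQ update):
# maintain an independent Gaussian belief N(mu[s,a], var[s,a]) over each
# Q-value and, after observing (s, a, r, s'), combine the prior belief with
# a Gaussian TD target via precision weighting (a simple moment-matching step).

class GaussianQBeliefs:
    def __init__(self, n_states, n_actions, gamma=0.99, obs_var=1.0,
                 init_mean=0.0, init_var=100.0):
        self.mu = np.full((n_states, n_actions), init_mean, dtype=float)
        self.var = np.full((n_states, n_actions), init_var, dtype=float)
        self.gamma = gamma
        self.obs_var = obs_var  # assumed observation-noise variance

    def update(self, s, a, r, s_next, done):
        # Build a Gaussian TD target from the next-state belief with the
        # largest mean (a crude stand-in for handling the max operator).
        if done:
            target_mean, target_var = r, 0.0
        else:
            b = int(np.argmax(self.mu[s_next]))
            target_mean = r + self.gamma * self.mu[s_next, b]
            target_var = (self.gamma ** 2) * self.var[s_next, b]

        # Precision-weighted combination of the prior belief on Q(s, a)
        # and the noisy target: the posterior mean and variance.
        prior_prec = 1.0 / self.var[s, a]
        target_prec = 1.0 / (target_var + self.obs_var)
        post_var = 1.0 / (prior_prec + target_prec)
        post_mean = post_var * (prior_prec * self.mu[s, a]
                                + target_prec * target_mean)
        self.mu[s, a], self.var[s, a] = post_mean, post_var

    def act(self, s, rng):
        # Thompson-style exploration: sample one value per action from the
        # beliefs and act greedily on the samples.
        samples = rng.normal(self.mu[s], np.sqrt(self.var[s]))
        return int(np.argmax(samples))
```

In this simplified form, as the belief variances shrink the precision-weighted update increasingly behaves like a standard step toward the TD target, which echoes, at a sketch level, the stated convergence of ADFQ to Q-learning as the uncertainty measures decrease.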