While off-policy temporal difference (TD) methods have been widely used in reinforcement learning due to their efficiency and simple implementation, their Bayesian counterparts have seen far less use. One reason is that the non-linear max operation in the Bellman optimality equation makes it difficult to define conjugate distributions over the value functions. In this paper, we introduce a novel Bayesian approach to off-policy TD methods, called ADFQ, which updates beliefs on state-action values, Q, through an online Bayesian inference method known as Assumed Density Filtering. We formulate an efficient closed-form solution for the value update by approximately estimating the analytic parameters of the posterior over the Q-beliefs. The uncertainty measures in the beliefs are used not only for exploration but also to provide a natural regularization of the value update that takes all available next actions into account. ADFQ converges to Q-learning as the uncertainty measures of the Q-beliefs decrease, and it mitigates common drawbacks of other Bayesian RL algorithms, such as high computational complexity. We further extend ADFQ with a neural network. Our empirical results demonstrate that ADFQ outperforms comparable algorithms on various Atari 2600 games, with substantial improvements in highly stochastic domains and domains with a large action space.
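To make the idea concrete, the sketch below maintains tabular Gaussian beliefs over Q-values with a simplified precision-weighted TD update and Thompson-style exploration. It is an illustrative approximation only, not the paper's derivation: ADFQ's closed-form ADF update accounts for all next actions, whereas this sketch uses the greedy next action, and all hyperparameters (prior variance, observation-noise variance) are hypothetical.

```python
import numpy as np

class GaussianQBeliefs:
    """Illustrative Gaussian beliefs over Q-values (not the exact ADFQ update)."""

    def __init__(self, n_states, n_actions, gamma=0.99,
                 prior_mean=0.0, prior_var=100.0, obs_var=1.0):
        self.mean = np.full((n_states, n_actions), prior_mean)
        self.var = np.full((n_states, n_actions), prior_var)
        self.gamma = gamma
        self.obs_var = obs_var  # assumed observation-noise variance (hypothetical)

    def select_action(self, s, rng):
        # Thompson-style exploration: sample once from each action's belief.
        samples = rng.normal(self.mean[s], np.sqrt(self.var[s]))
        return int(np.argmax(samples))

    def update(self, s, a, r, s_next, done):
        # Build a Gaussian TD-target belief. Simplification: use the greedy
        # next action instead of ADFQ's closed-form treatment of all actions.
        if done:
            target_mean, target_var = r, 0.0
        else:
            a_star = int(np.argmax(self.mean[s_next]))
            target_mean = r + self.gamma * self.mean[s_next, a_star]
            target_var = (self.gamma ** 2) * self.var[s_next, a_star]

        # Precision-weighted combination of the prior belief and target belief.
        prior_prec = 1.0 / self.var[s, a]
        target_prec = 1.0 / (target_var + self.obs_var)
        post_var = 1.0 / (prior_prec + target_prec)
        post_mean = post_var * (prior_prec * self.mean[s, a]
                                + target_prec * target_mean)
        self.mean[s, a], self.var[s, a] = post_mean, post_var
        # As the variances shrink, the update increasingly resembles a
        # standard Q-learning target, mirroring the convergence claim above.
```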