We introduce a novel perspective on Bayesian reinforcement learning (RL); whereas existing approaches infer a posterior over the transition distribution or Q-function, we characterise the uncertainty in the Bellman operator. Our Bayesian Bellman operator (BBO) framework is motivated by the insight that when bootstrapping is introduced, model-free approaches actually infer a posterior over Bellman operators, not value functions. In this paper, we use BBO to provide a rigorous theoretical analysis of model-free Bayesian RL to better understand its relationship to established frequentist RL methodologies. We prove that Bayesian solutions are consistent with frequentist RL solutions, even when approximate inference is used, and derive conditions for which convergence properties hold. Empirically, we demonstrate that algorithms derived from the BBO framework have sophisticated deep exploration properties that enable them to solve continuous control tasks at which state-of-the-art regularised actor-critic algorithms fail catastrophically.