We introduce a class of variational actor-critic algorithms based on a variational formulation over both the value function and the policy. The objective function of the variational formulation consists of two parts: one for maximizing the value function and the other for minimizing the Bellman residual. Besides vanilla gradient descent that updates both the value function and the policy, we propose two variants, a clipping method and a flipping method, to speed up convergence. We also prove that, when the prefactor of the Bellman residual is sufficiently large, the fixed point of the algorithm is close to the optimal policy.
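As a rough sketch in notation of our own choosing (the exact functional form, the state distribution $\mu$, and the prefactor symbol $\eta$ below are assumptions for illustration, not the paper's definitions), an objective with the two parts described above could read
\[
\min_{V,\pi}\; L(V,\pi) \;=\; -\,\mathbb{E}_{s\sim\mu}\big[V(s)\big] \;+\; \eta\,\mathbb{E}_{s\sim\mu}\Big[\big(V(s)-(\mathcal{T}^{\pi}V)(s)\big)^{2}\Big],
\]
where $\mathcal{T}^{\pi}$ denotes the Bellman operator under policy $\pi$: the first term drives the value function upward, the second term penalizes the Bellman residual, and $\eta>0$ is the prefactor that, when sufficiently large, keeps the fixed point close to the optimal policy.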