Trust region methods are a popular tool in reinforcement learning as they yield robust policy updates in continuous and discrete action spaces. However, enforcing such trust regions in deep reinforcement learning is difficult. Hence, many approaches, such as Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO), are based on approximations. Due to those approximations, they violate the constraints or fail to find the optimal solution within the trust region. Moreover, they are difficult to implement, often lack sufficient exploration, and have been shown to depend on seemingly unrelated implementation choices. In this work, we propose differentiable neural network layers to enforce trust regions for deep Gaussian policies via closed-form projections. Unlike existing methods, those layers formalize trust regions for each state individually and can complement existing reinforcement learning algorithms. We derive trust region projections based on the Kullback-Leibler divergence, the Wasserstein L2 distance, and the Frobenius norm for Gaussian distributions. We empirically demonstrate that those projection layers achieve similar or better results than existing methods while being almost agnostic to specific implementation choices. The code is available at https://git.io/Jthb0.
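To make the idea of a closed-form, per-state projection concrete, the sketch below shows one way such a differentiable trust region layer could look for diagonal Gaussian policies, using a Mahalanobis bound on the mean and a Frobenius-norm bound on the covariance. This is an illustrative example only, not the authors' implementation: the hyperparameter names (eps_mean, eps_cov), the diagonal-covariance assumption, and the interpolation form are assumptions made for this sketch.

```python
# Minimal sketch (assumptions noted above) of a Frobenius-style trust region
# projection layer for diagonal Gaussian policies.
import torch


def frobenius_projection(mean, std, old_mean, old_std, eps_mean=0.03, eps_cov=0.001):
    """Project (mean, std) back into a trust region around (old_mean, old_std).

    All inputs have shape (batch, action_dim). The mean uses a Mahalanobis
    bound under the old covariance, the covariance a squared Frobenius-norm
    bound. Both projections are closed-form interpolations toward the old
    parameters and stay differentiable w.r.t. the new parameters.
    """
    old_var = old_std ** 2

    # Mean part: per-state Mahalanobis distance to the old mean.
    mean_dist = ((mean - old_mean) ** 2 / old_var).sum(dim=-1)
    # omega = 0 inside the trust region, > 0 outside (projects onto the boundary).
    omega = (mean_dist / eps_mean).sqrt().clamp(min=1.0) - 1.0
    omega = omega.unsqueeze(-1)
    proj_mean = (mean + omega * old_mean) / (1.0 + omega)

    # Covariance part: squared Frobenius distance between diagonal covariances.
    cov, old_cov = std ** 2, old_var
    cov_dist = ((cov - old_cov) ** 2).sum(dim=-1)
    eta = (cov_dist / eps_cov).sqrt().clamp(min=1.0) - 1.0
    eta = eta.unsqueeze(-1)
    proj_cov = (cov + eta * old_cov) / (1.0 + eta)

    return proj_mean, proj_cov.sqrt()
```

Because the projected parameters are simple rational functions of the network outputs, gradients flow through the layer, so it can be dropped on top of an existing Gaussian policy head; analogous closed-form or iterative projections can be derived for the KL and Wasserstein L2 bounds described in the paper.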