We address the issue of safety in reinforcement learning. We pose the problem in a discounted infinite-horizon constrained Markov decision process (CMDP) framework. Existing results have shown that gradient-based methods can achieve an $\mathcal{O}(1/\sqrt{T})$ global convergence rate for both the optimality gap and the constraint violation. We exhibit a natural policy gradient-based algorithm with a faster convergence rate of $\mathcal{O}(\log(T)/T)$ for both the optimality gap and the constraint violation. When Slater's condition holds and is known a priori, zero constraint violation can further be guaranteed for sufficiently large $T$ while maintaining the same convergence rate.
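To make the setting concrete, the following is a minimal sketch, not the paper's exact algorithm, of a primal-dual natural policy gradient iteration on a toy tabular CMDP. All quantities (the 2-state, 2-action transition kernel, reward $r$, utility $g$, threshold $b$, and step sizes) are illustrative assumptions; under softmax policies the NPG step reduces to a multiplicative-weights update on the Q-values of the Lagrangian reward $r + \lambda g$, and the dual variable is updated by projected descent on the constraint violation.

```python
# Minimal sketch (assumptions, not the paper's algorithm): primal-dual NPG on a toy tabular CMDP.
import numpy as np

nS, nA, gamma, b = 2, 2, 0.9, 1.0               # states, actions, discount, constraint threshold (hypothetical)
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s']: transition kernel (made up for illustration)
r = rng.random((nS, nA))                        # reward r(s, a)
g = rng.random((nS, nA))                        # utility g(s, a); constraint is V_g(pi) >= b
rho = np.ones(nS) / nS                          # initial state distribution

def q_values(pi, c):
    """Exact Q-function of the scalar signal c(s, a) under policy pi (tabular policy evaluation)."""
    P_pi = np.einsum('sap,sa->sp', P, pi)       # state-to-state kernel under pi
    v = np.linalg.solve(np.eye(nS) - gamma * P_pi, np.einsum('sa,sa->s', pi, c))
    return c + gamma * np.einsum('sap,p->sa', P, v)

def value(pi, c):
    """Discounted value rho^T V_c(pi)."""
    P_pi = np.einsum('sap,sa->sp', P, pi)
    return rho @ np.linalg.solve(np.eye(nS) - gamma * P_pi, np.einsum('sa,sa->s', pi, c))

pi = np.ones((nS, nA)) / nA                     # uniform initial policy
lam, eta, eta_dual = 0.0, 1.0, 0.5              # dual variable and step sizes (illustrative choices)
for t in range(200):
    q_L = q_values(pi, r + lam * g)             # Q-values of the Lagrangian reward r + lam * g
    pi = pi * np.exp(eta * q_L)                 # NPG step for softmax policies
    pi /= pi.sum(axis=1, keepdims=True)         #   (normalization; equals soft policy iteration)
    lam = max(0.0, lam - eta_dual * (value(pi, g) - b))   # projected dual descent on the violation

print("return:", value(pi, r), "| constraint value:", value(pi, g), ">= b =", b)
```

The update uses Q-values rather than advantages; for softmax policies the two yield the same iterate, since a state-dependent shift cancels after normalization.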