Safe reinforcement learning (RL) studies problems where an intelligent agent must not only maximize reward but also avoid exploring unsafe regions. In this study, we propose CUP, a novel policy optimization method based on the Constrained Update Projection framework that enjoys a rigorous safety guarantee. Central to the development of CUP are our newly proposed surrogate functions together with their performance bounds. Compared to previous safe RL methods, CUP offers the following benefits: 1) CUP generalizes the surrogate functions to the generalized advantage estimator (GAE), leading to strong empirical performance; 2) CUP unifies performance bounds, providing a better understanding of and interpretability for some existing algorithms; 3) CUP provides a non-convex implementation using only first-order optimizers, which does not require any strong approximation of the convexity of the objectives. To validate CUP, we compare it against a comprehensive list of safe RL baselines on a wide range of tasks. Experiments show the effectiveness of CUP in terms of both reward and satisfaction of safety constraints. The source code of CUP is available at https://github.com/zmsn-2077/CUP-safe-rl.
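Below is a minimal, illustrative sketch of the two-step "improve, then project" update that the Constrained Update Projection framework describes, using only first-order optimization. The policy class, hyper-parameters, and loss shapes are assumptions made for illustration and are not the authors' reference implementation; in practice the reward and cost advantages would be estimated with GAE.

```python
# Illustrative sketch only: names (Policy, cup_step, nu, cost_violation, ...) are
# hypothetical and not taken from the CUP codebase.
import copy
import torch
import torch.nn as nn
from torch.distributions import Categorical

class Policy(nn.Module):
    """Tiny categorical policy over discrete actions (illustrative only)."""
    def __init__(self, obs_dim=4, n_act=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_act))
    def dist(self, obs):
        return Categorical(logits=self.net(obs))

def cup_step(policy, obs, act, reward_adv, cost_adv, cost_violation,
             clip=0.2, nu=1.0, lr=3e-4, iters=40):
    old = copy.deepcopy(policy)            # behavior policy (frozen)
    opt = torch.optim.Adam(policy.parameters(), lr=lr)

    # Step 1: performance improvement -- maximize a clipped reward surrogate
    # built from (GAE-estimated) reward advantages.
    for _ in range(iters):
        ratio = torch.exp(policy.dist(obs).log_prob(act)
                          - old.dist(obs).log_prob(act).detach())
        surr = torch.min(ratio * reward_adv,
                         torch.clamp(ratio, 1 - clip, 1 + clip) * reward_adv)
        opt.zero_grad(); (-surr.mean()).backward(); opt.step()

    # Step 2: projection -- pull the improved policy back toward the feasible set
    # by trading off a KL distance to the intermediate policy against a cost
    # surrogate, weighted by the current constraint violation.
    mid = copy.deepcopy(policy)            # intermediate policy from step 1 (frozen)
    for _ in range(iters):
        kl = torch.distributions.kl_divergence(policy.dist(obs), mid.dist(obs)).mean()
        ratio = torch.exp(policy.dist(obs).log_prob(act)
                          - old.dist(obs).log_prob(act).detach())
        proj = kl + nu * cost_violation * (ratio * cost_adv).mean()
        opt.zero_grad(); proj.backward(); opt.step()

# Hypothetical usage with random data:
obs = torch.randn(64, 4); act = torch.randint(0, 2, (64,))
cup_step(Policy(), obs, act, torch.randn(64), torch.randn(64), cost_violation=0.1)
```

Both steps rely only on first-order gradient updates, which is the sense in which the abstract's non-convex, first-order implementation avoids convexity approximations of the objectives.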