Constrained reinforcement learning (CRL) has attracted significant interest recently, since satisfying safety constraints is critical in real-world problems. However, existing CRL methods, which constrain discounted cumulative costs, generally lack a rigorous definition and guarantee of safety. In contrast, in safe control research, safety is defined as persistently satisfying certain state constraints. Such persistent safety is possible only on a subset of the state space, called the feasible set, and for a given environment there exists a largest feasible set. Recent studies incorporate feasible sets into CRL through energy-based methods such as the control barrier function (CBF) and the safety index (SI), but they rely on prior conservative estimations of feasible sets, which harms the performance of the learned policy. To address this problem, this paper proposes the reachability CRL (RCRL) method, which uses reachability analysis to establish a novel self-consistency condition and to characterize the feasible sets. The feasible sets are represented by the safety value function, which serves as the constraint in CRL. Using multi-time-scale stochastic approximation theory, we prove that the proposed algorithm converges to a local optimum where the largest feasible set is guaranteed. Empirical results on different benchmarks validate the learned feasible set, the policy performance, and the constraint satisfaction of RCRL against CRL and safe control baselines.
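For context, the self-consistency condition and feasible set mentioned above can be sketched in standard reachability notation (a sketch, not quoted from this abstract; here $h$ denotes the state-constraint function, with $h(s) \le 0$ meaning the constraint holds at state $s$, and $s'$ is the successor state):

```latex
% Safety value of policy \pi: the worst-case future constraint value along the trajectory
V^{\pi}_{h}(s) = \max_{t \ge 0} \; h(s_t), \qquad s_0 = s
% Self-consistency condition: a recursive (Bellman-like) form of the definition above
V^{\pi}_{h}(s) = \max\bigl\{\, h(s),\; V^{\pi}_{h}(s') \,\bigr\}
% Feasible set: states from which some policy keeps the constraint satisfied forever
\mathcal{S}_{f} = \bigl\{\, s : \min_{\pi} V^{\pi}_{h}(s) \le 0 \,\bigr\}
```

Under this sketch, constraining the learned policy by the safety value function $V^{\pi}_{h}$, rather than by a discounted cumulative cost, is what allows persistent state-constraint satisfaction to be stated and guaranteed on the feasible set.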