Safety is one of the biggest concerns in applying reinforcement learning (RL) to the physical world. At its core, the challenge is to ensure that RL agents persistently satisfy a hard state constraint without white-box or black-box dynamics models. This paper presents an integrated model learning and safe control framework that safeguards any agent whose dynamics are learned as Gaussian processes. The proposed theory provides (i) a novel method to construct an offline dataset for model learning that best achieves the safety requirement; (ii) a parameterization rule for the safety index that guarantees the existence of safe control; and (iii) a safety guarantee, in terms of probabilistic forward invariance, when the model is learned with the aforementioned dataset. Simulation results show that our framework guarantees almost zero safety violation on various continuous control tasks.
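To make the three ingredients concrete, the following is a minimal illustrative sketch, not the paper's implementation: it fits a scikit-learn Gaussian process to the unknown acceleration of a toy 1-D double integrator, defines a simple safety index phi for an upper position bound, and filters actions through a pessimistic (mean + 2 std) one-step prediction. All names (true_accel, safe_action, p_max, k, eta), the discretization, and the grid search over candidate actions are assumptions made for illustration only.

```python
# Minimal sketch of GP model learning + safety-index-filtered control.
# Illustrative only; not the authors' method or dataset-construction rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
dt = 0.1

def true_accel(v, u):
    # Ground-truth acceleration, treated as a black box by the agent.
    return u - 0.5 * v * abs(v)  # hidden quadratic drag (assumed for the toy)

# (i) Offline dataset: (velocity, action) -> noisy acceleration samples.
X = rng.uniform([-2.0, -2.0], [2.0, 2.0], size=(200, 2))
y = np.array([true_accel(v, u) for v, u in X]) + 0.01 * rng.standard_normal(200)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# (ii) A simple safety index for the hard constraint p <= p_max:
# phi(p, v) = p - p_max + k * v, with k > 0 so phi rises before the boundary.
p_max, k, eta = 1.0, 0.5, 0.5

def phi(p, v):
    return p - p_max + k * v

# (iii) Safe control filter: keep candidate actions whose pessimistic
# (mean + 2 std) one-step prediction decreases phi whenever phi >= 0,
# then pick the feasible action closest to the nominal one.
def safe_action(p, v, u_nom):
    candidates = np.linspace(-2.0, 2.0, 81)
    inputs = np.column_stack([np.full_like(candidates, v), candidates])
    mean, std = gp.predict(inputs, return_std=True)
    a_worst = mean + 2.0 * std        # pessimistic: larger accel raises phi here
    v_next = v + dt * a_worst
    p_next = p + dt * v
    phi_next = phi(p_next, v_next)
    if phi(p, v) >= 0:
        ok = phi_next <= phi(p, v) - eta * dt   # force phi to decrease
    else:
        ok = phi_next < 0                        # stay inside the safe set
    feasible = candidates[ok]
    if feasible.size == 0:
        return candidates[np.argmin(phi_next)]  # least-unsafe fallback
    return feasible[np.argmin(np.abs(feasible - u_nom))]

# Nominal action pushes toward the position bound; the filter overrides it.
print(safe_action(p=0.9, v=0.5, u_nom=2.0))
```

The mean + 2 std bound plays the role of the probabilistic guarantee in the abstract: the filter certifies phi decrease against a GP confidence interval rather than against the (unknown) true dynamics.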