Satisfying safety constraints almost surely (or with probability one) can be critical for the deployment of Reinforcement Learning (RL) in real-life applications. For example, plane landing and take-off should ideally occur with probability one. We address the problem by introducing Safety Augmented (Saute) Markov Decision Processes (MDPs), where the safety constraints are eliminated by augmenting them into the state space and reshaping the objective. We show that Saute MDPs satisfy the Bellman equation and move us closer to solving Safe RL with constraints satisfied almost surely. We argue that Saute MDPs allow viewing the Safe RL problem from a different perspective, enabling new features. For instance, our approach has a plug-and-play nature, i.e., any RL algorithm can be "Sauteed". Additionally, state augmentation allows for policy generalization across safety constraints. Finally, we show that Saute RL algorithms can outperform their state-of-the-art counterparts when constraint satisfaction is of high importance.
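To make the state-augmentation idea concrete, below is a minimal sketch (not the authors' implementation) of how a Gymnasium environment might be "Sauteed": the observation is extended with the remaining normalized safety budget, and the reward is replaced by a penalty once that budget is exhausted, so an unconstrained RL algorithm can be applied unchanged. The wrapper name, the `safety_budget` and `unsafe_penalty` parameters, and the convention that the per-step safety cost is reported in `info["cost"]` are illustrative assumptions.

```python
import gymnasium as gym
import numpy as np


class SauteWrapper(gym.Wrapper):
    """Sketch of safety-state augmentation: append the remaining (normalized)
    safety budget to the observation and reshape the reward when the budget
    is exhausted, so the constraint is folded into the objective."""

    def __init__(self, env, safety_budget=1.0, unsafe_penalty=-10.0, cost_key="cost"):
        super().__init__(env)
        self.safety_budget = safety_budget
        self.unsafe_penalty = unsafe_penalty
        self.cost_key = cost_key
        # Extend a Box observation space with one extra dimension for the budget.
        low = np.append(env.observation_space.low, -np.inf)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.z = 1.0  # remaining budget, normalized to start at 1
        return np.append(obs, self.z), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        cost = float(info.get(self.cost_key, 0.0))  # per-step safety cost (assumed convention)
        self.z -= cost / self.safety_budget         # spend the normalized budget
        if self.z < 0.0:                            # budget exhausted: constraint violated
            reward = self.unsafe_penalty            # reshape the objective
        return np.append(obs, self.z), reward, terminated, truncated, info
```

Because the budget is part of the state, the same policy network conditions on how much safety margin remains, which is what enables both the plug-and-play use of off-the-shelf RL algorithms and generalization across different safety budgets.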