Safety and robustness are two desirable properties for any reinforcement learning algorithm. CMDPs can handle additional safety constraints, and RMDPs can perform well under model uncertainties. In this paper, we propose to unite these two frameworks, resulting in robust constrained MDPs (RCMDPs). The motivation is to develop a framework that satisfies safety constraints while simultaneously offering robustness to model uncertainties. We develop the RCMDP objective, derive a gradient update formula to optimize this objective, and then propose policy-gradient-based algorithms. We also independently propose Lyapunov-based reward shaping for RCMDPs, yielding better stability and convergence properties.