Reinforcement learning (RL) is an area of significant research interest, and safe RL in particular is attracting attention for its ability to handle the safety constraints that are crucial to real-world applications of RL algorithms. This work proposes a novel approach to RL training, called control invariant set (CIS) enhanced RL, which leverages a CIS to strengthen stability guarantees and improve sampling efficiency. The approach consists of two learning stages: offline and online. In the offline stage, the CIS is incorporated into the reward design, initial state sampling, and state reset procedures. In the online stage, the RL agent is retrained whenever the state leaves the CIS, with CIS membership serving as the stability criterion. A backup table, constructed from the explicit form of the CIS, ensures stability during online operation. To evaluate the proposed approach, we apply it to a simulated chemical reactor. The results show a significant improvement in sampling efficiency during offline training and closed-loop stability in the online implementation.
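To make the offline stage concrete, the following is a minimal sketch of how a CIS can enter the reward design, initial state sampling, and state reset procedures. All names here (`in_cis`, `sample_initial_state`, `step`, `PENALTY`) are illustrative assumptions rather than the paper's implementation, and the CIS is modeled as a toy box set for readability.

```python
# Sketch of CIS-enhanced offline training, assuming a box-shaped toy CIS.
# None of these names come from the paper; they are placeholders.
import numpy as np

CIS_LOW, CIS_HIGH = np.array([-1.0, -1.0]), np.array([1.0, 1.0])  # toy CIS bounds
PENALTY = -100.0  # large negative reward for leaving the CIS (assumed value)

def in_cis(x):
    """Membership check against the explicit CIS description."""
    return bool(np.all(x >= CIS_LOW) and np.all(x <= CIS_HIGH))

def sample_initial_state(rng):
    """Initial-state sampling restricted to the CIS (trivial for a box set)."""
    return rng.uniform(CIS_LOW, CIS_HIGH)

def step(x, u):
    """Placeholder one-step dynamics; a reactor model would replace this."""
    return x + 0.1 * (u - 0.5 * x)

def reward(x_next, x_sp):
    """CIS-shaped reward: tracking cost inside the set, penalty outside."""
    if not in_cis(x_next):
        return PENALTY
    return -float(np.sum((x_next - x_sp) ** 2))

rng = np.random.default_rng(0)
x_sp = np.zeros(2)  # set-point (assumed)
for episode in range(5):
    x = sample_initial_state(rng)
    for t in range(50):
        u = rng.uniform(-1.0, 1.0, size=2)  # stand-in for the RL policy
        x_next = step(x, u)
        r = reward(x_next, x_sp)
        # ... store (x, u, r, x_next) and update the agent here ...
        if not in_cis(x_next):
            x = sample_initial_state(rng)  # state reset back into the CIS
        else:
            x = x_next
```

The same `in_cis` check would drive the online stage: a violation triggers retraining and a fall back to the backup table derived from the explicit CIS.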