Deploying reinforcement learning (RL) safely in the real world is challenging, as policies trained in simulation must contend with the inevitable sim-to-real gap. Robust safe RL techniques offer provable safety guarantees but are difficult to scale, whereas domain randomization is more practical yet prone to unsafe behaviors. We address this gap by proposing SPiDR, short for Sim-to-real via Pessimistic Domain Randomization, a scalable algorithm with provable guarantees for safe sim-to-real transfer. SPiDR uses domain randomization to incorporate uncertainty about the sim-to-real gap into the safety constraints, making it versatile and highly compatible with existing training pipelines. Through extensive experiments on sim-to-sim benchmarks and two distinct real-world robotic platforms, we demonstrate that SPiDR effectively ensures safety despite the sim-to-real gap while maintaining strong performance.
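To make the core idea concrete, the sketch below illustrates one way a pessimistic domain-randomization constraint could look: a policy's safety cost is evaluated across simulator parameters drawn from the randomization distribution, and the constraint is imposed on the worst case, so uncertainty about the sim-to-real gap is folded directly into the safety check. This is a minimal toy illustration, not SPiDR's actual implementation; all names, parameter ranges, and the closed-form cost are hypothetical assumptions.

```python
import numpy as np

def sample_sim_params(rng, num_samples):
    # Draw simulator parameters (here, a friction and a mass scale) from the
    # domain-randomization distribution; these ranges are placeholders.
    return rng.uniform(low=(0.5, 0.8), high=(1.5, 1.2), size=(num_samples, 2))

def rollout_cost(policy_gain, params):
    # Toy stand-in for a simulator rollout: returns the accumulated
    # safety-constraint cost of the policy under these parameters.
    friction, mass = params
    return policy_gain * mass / friction  # higher gain or mass -> more cost

def pessimistic_constraint_ok(policy_gain, rng, num_samples=16, budget=2.0):
    # Pessimistic safety estimate: evaluate the cost across the randomized
    # simulator parameters and constrain the WORST case, rather than the
    # average, so the sim-to-real uncertainty tightens the safety constraint.
    costs = [rollout_cost(policy_gain, p)
             for p in sample_sim_params(rng, num_samples)]
    return max(costs) <= budget

rng = np.random.default_rng(0)
print(pessimistic_constraint_ok(policy_gain=0.8, rng=rng))  # safe if worst case fits budget
```

Replacing the worst-case `max` with an average would recover ordinary (non-pessimistic) domain randomization, which is exactly the variant the abstract describes as prone to unsafe behaviors.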