Multi-Agent Reinforcement Learning (MARL) discovers policies that maximize reward but lack safety guarantees during both learning and deployment. Although shielding with Linear Temporal Logic (LTL) is a promising formal method for ensuring safety in single-agent Reinforcement Learning (RL), it leads to conservative behaviors when scaled to multi-agent scenarios and poses computational challenges for synthesizing shields in complex multi-agent environments. This work introduces Model-based Dynamic Shielding (MBDS) to support MARL algorithm design. Our algorithm synthesizes distributed shields, reactive systems that run in parallel with each MARL agent to monitor and rectify unsafe behaviors. The shields can dynamically split, merge, and recompute based on agents' states. This design enables efficient synthesis of shields that monitor agents in complex environments without coordination overhead. We also propose an algorithm that synthesizes shields without prior knowledge of the dynamics model: it obtains an approximate world model by interacting with the environment during the early stage of exploration, so that MBDS enjoys formal safety guarantees with high probability. We demonstrate in simulations that our framework surpasses existing baselines in terms of both safety guarantees and learning performance.
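To make the shielding idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: a per-agent shield that checks the action proposed by a MARL policy against an approximate world model and substitutes a safe fallback when the proposed action is predicted to reach an unsafe state. All names here (Shield, world_model, is_unsafe, safe_fallback) are hypothetical placeholders, and the one-step safety check stands in for the paper's LTL-based synthesis.

```python
# Hypothetical sketch of a shield as an action filter running in parallel with an agent.
from typing import Callable, Any


class Shield:
    def __init__(self,
                 world_model: Callable[[Any, Any], Any],
                 is_unsafe: Callable[[Any], bool],
                 safe_fallback: Callable[[Any], Any]):
        self.world_model = world_model      # approximate dynamics: (state, action) -> next state
        self.is_unsafe = is_unsafe          # predicate flagging unsafe states
        self.safe_fallback = safe_fallback  # returns an action assumed safe for the current state

    def rectify(self, state, proposed_action):
        """Return the proposed action if predicted safe, otherwise a safe fallback."""
        predicted_next = self.world_model(state, proposed_action)
        if self.is_unsafe(predicted_next):
            return self.safe_fallback(state)
        return proposed_action


if __name__ == "__main__":
    # Toy 1-D example: states are integers, the unsafe region is x >= 5.
    shield = Shield(
        world_model=lambda s, a: s + a,
        is_unsafe=lambda s: s >= 5,
        safe_fallback=lambda s: 0,  # "stay" is assumed safe in this toy setting
    )
    state, policy_action = 4, +1                 # policy proposes stepping into the unsafe region
    print(shield.rectify(state, policy_action))  # prints 0: the shield rectifies the action
```

In the multi-agent setting described above, each agent would run its own such shield, and the shields' scopes would be split, merged, or recomputed as agents move closer together or apart; that dynamic bookkeeping is omitted from this sketch.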