The Robot Operating System (ROS) is a widely used framework for building robotic systems. It offers a wide variety of reusable packages and patterns for new development. How these elements are combined and integrated with decision-making for autonomous behavior is left to developers. The most valued feature of such decision-making is, in general, safety assurance. In this research preview, we present a formal approach for generating safe autonomous decision-making in ROS. We first describe how to improve our existing static verification approach to verify multi-goal, multi-agent decision-making. We then describe how to transition from this improved static verification approach to the proposed runtime verification approach. An initial implementation of this research proposal yields promising results.