While Deep Reinforcement Learning (DRL) provides transformational capabilities for the control of Robotics and Autonomous Systems (RAS), the black-box nature of DRL and the uncertain deployment environments of RAS pose new challenges to its dependability. Although many existing works impose constraints on the DRL policy to ensure successful completion of the mission, they fall short of assessing DRL-driven RAS holistically, across all dependability properties. In this paper, we formally define a set of dependability properties in temporal logic and construct a Discrete-Time Markov Chain (DTMC) to model the dynamics of risks/failures of a DRL-driven RAS interacting with its stochastic environment. We then perform Probabilistic Model Checking on the designed DTMC to verify those properties. Our experimental results show that the proposed method is effective as a holistic assessment framework, while uncovering conflicts between the properties that may require trade-offs during training. Moreover, we find that standard DRL training does not improve the dependability properties, motivating bespoke optimisation objectives that target them. Finally, our method offers a novel dependability analysis of the Sim-to-Real challenge of DRL.
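To make the verification step concrete, the sketch below illustrates the core computation behind checking a PCTL-style reachability property, P=? [ F "failure" ], on a DTMC. The 4-state chain, its state labels, and all transition probabilities are hypothetical toy values, not the paper's model; in practice such queries are typically delegated to a probabilistic model checker such as PRISM or Storm rather than solved by hand.

```python
# A minimal sketch (toy example, not the paper's actual model): computing the
# probability of eventually reaching a "failure" state in a small DTMC by
# solving the standard linear system for reachability probabilities.
import numpy as np

# Hypothetical DTMC over states {0: "operational", 1: "degraded",
# 2: "failure", 3: "mission_complete"}; each row is a probability
# distribution over successor states.
P = np.array([
    [0.90, 0.05, 0.01, 0.04],
    [0.30, 0.50, 0.15, 0.05],
    [0.00, 0.00, 1.00, 0.00],   # "failure" is absorbing
    [0.00, 0.00, 0.00, 1.00],   # "mission_complete" is absorbing
])

transient = [0, 1]   # states from which "failure" may still be reached
target = 2          # the "failure" state

# For each transient state s: x_s = P[s, target] + sum_t P[s, t] * x_t,
# which rearranges to (I - A) x = b, with A the transition matrix
# restricted to the transient states and b the one-step jump to "failure".
A = P[np.ix_(transient, transient)]
b = P[transient, target]
x = np.linalg.solve(np.eye(len(transient)) - A, b)

for s, prob in zip(transient, x):
    print(f"Prob. of eventually reaching failure from state {s}: {prob:.4f}")
```

Bounded-time variants of such properties (e.g., failure within k steps) follow the same pattern but iterate the recurrence k times instead of solving the fixed-point system.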