Deep reinforcement learning (DRL) has become a dominant deep-learning paradigm for tasks where complex policies are learned within reactive systems. Unfortunately, these policies are known to be susceptible to bugs. Despite significant progress in DNN verification, there has been little work demonstrating the use of modern verification tools on real-world, DRL-controlled systems. In this case study, we attempt to begin bridging this gap, focusing on the important task of mapless robotic navigation -- a classic robotics problem in which a robot, usually controlled by a DRL agent, must efficiently and safely navigate through an unknown arena towards a target. We demonstrate how modern verification engines can be used for effective model selection, i.e., selecting the best available policy for the robot in question from a pool of candidate policies. Specifically, we use verification to detect and rule out policies that may exhibit suboptimal behavior, such as collisions and infinite loops. We also apply verification to identify models with overly conservative behavior, thus allowing users to choose superior policies, which may find shorter paths to the target. To validate our work, we conducted extensive experiments on an actual robot and confirmed that the suboptimal policies detected by our method were indeed flawed. We also demonstrate the superiority of our verification-driven approach over state-of-the-art gradient-based attacks. Our work is the first to establish the usefulness of DNN verification in identifying and filtering out suboptimal DRL policies in real-world robots, and we believe that the methods presented here are applicable to a wide range of systems that incorporate deep-learning-based agents.