Recent advances in learning-based perception systems have led to drastic improvements in the performance of robotic systems such as autonomous vehicles and surgical robots. These perception systems, however, are hard to analyze, and errors in them can propagate and cause catastrophic failures. In this paper, we consider the problem of synthesizing safe and robust controllers for robotic systems that rely on complex perception modules for feedback. We propose a counterexample-guided synthesis framework that iteratively builds simple surrogate models of the complex perception module and uses them to find safe control policies. The framework uses a falsifier to find counterexamples, i.e., traces of the system that violate a safety property, and extracts from them information that enables efficient modeling of the perception module and its errors. These models are then used to synthesize controllers that are robust to perception errors. If the resulting policy is not safe, we gather new counterexamples. By repeating this process, we eventually find a controller that can keep the system safe even when there is a perception failure. We demonstrate our framework on two scenarios in simulation, namely lane keeping and automatic braking, and show that it generates controllers that are safe, along with a simpler model of the deep neural network-based perception system that provides meaningful insight into its operation.
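To make the iteration concrete, the following is a minimal Python sketch of the counterexample-guided loop described above. The callables `falsify`, `fit_surrogate`, and `synthesize`, and their signatures, are hypothetical placeholders standing in for the framework's components, not the authors' actual implementation.

```python
from typing import Callable, List

def cegis_loop(
    falsify: Callable,        # controller -> violating trace, or None if none found
    fit_surrogate: Callable,  # counterexample traces -> surrogate perception model
    synthesize: Callable,     # surrogate model -> candidate controller
    initial_controller,
    max_iters: int = 20,
):
    """Counterexample-guided synthesis: falsify, model perception errors,
    re-synthesize a robust controller, and repeat until no violation is found."""
    counterexamples: List = []
    controller = initial_controller
    for _ in range(max_iters):
        # Falsification: search for a system trace that violates the
        # safety property with the real perception module in the loop.
        trace = falsify(controller)
        if trace is None:
            return controller  # falsifier found no violation: accept controller
        counterexamples.append(trace)
        # Build a simple surrogate model of the perception module and its
        # errors from the counterexamples gathered so far.
        surrogate = fit_surrogate(counterexamples)
        # Synthesize a new controller that is robust to the modeled errors.
        controller = synthesize(surrogate)
    raise RuntimeError("iteration budget exhausted without a safe controller")
```

The loop terminates either when the falsifier fails to find a violating trace (the candidate controller is accepted as safe) or when the iteration budget is exhausted.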