Convolutional Neural Networks (CNNs) for object detection, lane detection, and segmentation now sit at the head of most autonomy pipelines, yet their safety analysis remains an important challenge. Formal analysis of perception models is fundamentally difficult because their correctness is hard, if not impossible, to specify. We present a technique for inferring intelligible and safe abstractions for perception models from system-level safety requirements, data, and program analysis of the modules downstream from perception. The technique helps trade off safety, size, and precision in creating the abstractions and in the subsequent verification. We apply the method to two significant case studies based on high-fidelity simulations: (a) a vision-based lane-keeping controller for an autonomous vehicle and (b) a controller for an agricultural robot. We show how the generated abstractions can be composed with the downstream modules and how the resulting abstract system can then be verified using program analysis tools such as CBMC. Detailed evaluations of the impact of size, safety requirements, and environmental parameters (e.g., lighting, road surface, plant type) on the precision of the generated abstractions suggest that the approach can help guide the search for corner cases and safe operating envelopes.
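As a rough illustration of the composition-and-verification step described above, the following C sketch shows how an interval-style perception abstraction might be composed with a downstream controller and checked with CBMC. All names and constants (perceive_abs, EPS, GAIN, HE_MAX, the toy plant model, and the specific bounds) are hypothetical placeholders, not the models or abstractions used in the case studies; the point is only the pattern of replacing the CNN with a nondeterministic over-approximation and asserting the system-level safety requirement.

```c
/* Minimal sketch (hypothetical numbers) of composing an interval-style
 * perception abstraction with a downstream controller and checking a
 * system-level safety property with CBMC, e.g.:
 *   cbmc lane_keep_abs.c --function check_step
 */
#include <assert.h>

float nondet_float(void);      /* CBMC-provided nondeterministic input */

#define EPS    0.05f   /* hypothetical abstraction error bound (rad)     */
#define GAIN   0.8f    /* hypothetical proportional steering gain        */
#define HE_MAX 0.30f   /* hypothetical safe heading-error envelope (rad) */
#define DT     0.1f    /* control period (s), illustrative only          */

/* Abstraction of the CNN lane detector: instead of running the network,
 * return any estimate within EPS of the true heading error.             */
float perceive_abs(float true_he) {
    float est = nondet_float();
    __CPROVER_assume(est >= true_he - EPS && est <= true_he + EPS);
    return est;
}

/* Downstream module: a simple proportional lane-keeping controller.     */
float control(float est_he) {
    return -GAIN * est_he;               /* steering command             */
}

/* Toy plant update: heading error changes with the steering command.    */
float plant(float true_he, float steer) {
    return true_he + DT * steer;
}

/* One closed-loop step: if we start inside the safe envelope, we must
 * stay inside it for every estimate the perception abstraction allows.  */
void check_step(void) {
    float he = nondet_float();
    __CPROVER_assume(he >= -HE_MAX && he <= HE_MAX);

    float est   = perceive_abs(he);
    float steer = control(est);
    float next  = plant(he, steer);

    assert(next >= -HE_MAX && next <= HE_MAX);   /* invariance check */
}
```

Under these illustrative constants the invariance assertion holds (|next| is at most 0.92*HE_MAX + DT*GAIN*EPS < HE_MAX), so CBMC would report verification success; tightening HE_MAX or loosening EPS shows how the abstraction's precision trades against the provable safety envelope.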