Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually improving a DNN-based perception function can invalidate previously established safety verification results. This can occur either due to newly encountered examples inside the Operational Design Domain (i.e., input domain enlargement) or due to subsequent fine-tuning of the DNN's parameters. This paper considers approaches for transferring results established in the previous DNN safety verification problem to the modified problem setting. By considering the reuse of state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that only require formally analyzing a small part of the DNN in the new problem. The overall concept is evaluated on a $1/10$-scale vehicle equipped with a DNN controller that determines the visual waypoint from the perceived image.
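As a rough illustration of the Lipschitz-constant reuse idea, the following minimal Python sketch checks one such sufficient condition for the special case where fine-tuning touches only the last linear layer of a bias-free ReLU network. All names (`W1`, `W2`, `W3_old`, `W3_new`, the region radius `R`, and the certified `margin`) are hypothetical placeholders, and the condition shown is only one instance in the spirit of the paper's approach, not its actual set of sufficient conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a two-hidden-layer ReLU network whose last
# linear layer is the only part changed by fine-tuning.
W1 = rng.normal(size=(16, 8)) / 4.0
W2 = rng.normal(size=(16, 16)) / 4.0
W3_old = rng.normal(size=(2, 16)) / 4.0
W3_new = W3_old + 1e-3 * rng.normal(size=(2, 16))  # small parameter update

def lip(W):
    # Spectral norm = Lipschitz constant of the linear layer;
    # ReLU is 1-Lipschitz and does not enlarge the bound.
    return np.linalg.svd(W, compute_uv=False)[0]

# Reused Lipschitz constant of the unchanged prefix
# h(x) = ReLU(W2 ReLU(W1 x)); only the changed layer is reanalyzed.
L_prefix = lip(W1) * lip(W2)

R = 1.0        # radius of the previously verified input region (||x|| <= R)
margin = 0.5   # output margin certified by the old verification run

# With bias-free ReLU layers, h(0) = 0, so ||h(x)|| <= L_prefix * ||x||.
# The old and new networks share h, hence their output deviation obeys
# ||f_new(x) - f_old(x)|| <= ||W3_new - W3_old||_2 * L_prefix * R.
deviation = lip(W3_new - W3_old) * L_prefix * R

# Sufficient condition: the old certificate transfers if the deviation
# cannot consume the previously certified margin.
if deviation < margin:
    print(f"old certificate still valid (deviation {deviation:.4f} < {margin})")
else:
    print("condition inconclusive: re-verify the modified network")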