Path-tracking control of self-driving vehicles can benefit from deep learning for tackling longstanding challenges such as nonlinearity and uncertainty. However, deep neural controllers lack safety guarantees, restricting their practical use. We propose a new approach that learns almost-barrier functions, which approximately characterize the forward invariant set of the system under a neural controller, to quantitatively analyze the safety of deep neural controllers for path tracking. We design sampling-based learning procedures for constructing candidate neural barrier functions, and certification procedures that use robustness analysis for neural networks to identify regions where the barrier conditions are fully satisfied. We use an adversarial training loop between learning and certification to optimize the almost-barrier functions. The learned barrier can also be used to construct online safety monitors through reachability analysis. We demonstrate the effectiveness of our methods in quantifying the safety of neural controllers in various simulation environments, ranging from simple kinematic models to the TORCS simulator with high-fidelity vehicle dynamics.
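For context on the barrier conditions referenced above, one standard discrete-time barrier-certificate formulation is sketched below (illustrative notation only; the exact conditions used in the paper may differ). It asks for a function $B$ whose zero-superlevel set is forward invariant under the closed-loop dynamics $x_{t+1} = f(x_t, \pi(x_t))$ with neural controller $\pi$, initial set $\mathcal{X}_0$, and unsafe set $\mathcal{X}_u$:

\begin{align*}
  % Candidate safe (forward invariant) set: zero-superlevel set of B.
  &\mathcal{C} = \{\, x \in \mathcal{X} : B(x) \ge 0 \,\}, \\
  % Initial states lie inside the candidate set.
  &B(x) \ge 0 \quad \forall x \in \mathcal{X}_0, \\
  % Unsafe states lie strictly outside it.
  &B(x) < 0 \quad \forall x \in \mathcal{X}_u, \\
  % One-step invariance: a closed-loop step cannot leave the set
  % (alpha in [0,1) relaxes the condition away from the boundary).
  &B\bigl(f(x, \pi(x))\bigr) \ge \alpha\, B(x) \quad \forall x \in \mathcal{C},\ \alpha \in [0,1).
\end{align*}

Under these conditions, $B(x_0) \ge 0$ implies $B(x_t) \ge 0$ for all $t$, so trajectories starting in $\mathcal{C}$ never enter $\mathcal{X}_u$. The "almost" qualifier reflects that, in our setting, such conditions are certified only on the regions identified by the certification procedure rather than everywhere.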