Evaluating the robustness of automated driving planners is a critical and challenging task. Although methodologies for evaluating vehicles are well established, they do not yet account for a reality in which vehicles with autonomous components share the road with adversarial agents. Our approach, based on probabilistic trust models, aims to help researchers assess the robustness of protections for machine-learning-enabled planners against adversarial influence. In contrast with established practices that evaluate safety using the same evaluation dataset for all vehicles, we argue that adversarial evaluation fundamentally requires a process that seeks to defeat a specific protection. Hence, we propose that evaluations be based on estimating the difficulty an adversary faces in determining conditions that effectively induce unsafe behavior. This type of inference requires precise statements about threats, protections, and the aspects of planning decisions to be guarded. We demonstrate our approach by evaluating protections for planners that rely on camera-based object detectors.