As self-driving systems improve, simulating scenarios where the autonomy stack may fail becomes increasingly important. Traditionally, these scenarios are generated for a handful of scenes with respect to the planning module, which takes ground-truth actor states as input. This approach does not scale and cannot identify all possible autonomy failures, such as perception failures due to occlusion. In this paper, we propose AdvSim, an adversarial framework for generating safety-critical scenarios for any LiDAR-based autonomy system. Given an initial traffic scenario, AdvSim modifies the actors' trajectories in a physically plausible manner and updates the LiDAR sensor data to match the perturbed world. Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack. Our experiments show that our approach is general and can identify thousands of semantically meaningful safety-critical scenarios for a wide range of modern self-driving systems. Furthermore, we show that the robustness and safety of these systems can be further improved by training them with scenarios generated by AdvSim.
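The search described above can be illustrated with a minimal black-box sketch. This is not the paper's actual method (which simulates LiDAR point clouds and optimizes against the full stack); here, `perturb_trajectory`, `is_physically_plausible`, and `autonomy_cost` are all hypothetical toy stand-ins, with the cost reduced to ego-actor proximity and plausibility reduced to a waypoint step bound:

```python
import random

def perturb_trajectory(traj, rng, scale=0.5):
    # Jitter each (x, y) waypoint independently; a toy stand-in for the
    # paper's kinematically constrained trajectory perturbation.
    return [(x + rng.uniform(-scale, scale), y + rng.uniform(-scale, scale))
            for x, y in traj]

def is_physically_plausible(traj, max_step=2.0):
    # Hypothetical feasibility check: consecutive waypoints must stay
    # within max_step of each other (crude bound on actor speed).
    return all(abs(x2 - x1) <= max_step and abs(y2 - y1) <= max_step
               for (x1, y1), (x2, y2) in zip(traj, traj[1:]))

def autonomy_cost(ego_plan, actor_traj):
    # Toy stand-in for evaluating the full autonomy stack on simulated
    # sensor data: higher when the actor comes closer to the ego plan
    # (negative minimum ego-actor distance over time).
    return -min(((ex - ax) ** 2 + (ey - ay) ** 2) ** 0.5
                for (ex, ey), (ax, ay) in zip(ego_plan, actor_traj))

def advsim_search(ego_plan, actor_traj, iters=200, seed=0):
    # Black-box hill climbing: keep the most adversarial plausible
    # perturbation found so far.
    rng = random.Random(seed)
    best_traj = actor_traj
    best_score = autonomy_cost(ego_plan, actor_traj)
    for _ in range(iters):
        candidate = perturb_trajectory(best_traj, rng)
        if not is_physically_plausible(candidate):
            continue  # reject physically implausible perturbations
        score = autonomy_cost(ego_plan, candidate)
        if score > best_score:
            best_traj, best_score = candidate, score
    return best_traj, best_score
```

In the actual framework, the cost function would run the self-driving system end-to-end on LiDAR data re-simulated for the perturbed scene, so the search can surface perception failures as well as planning failures.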