As self-driving systems improve, simulating scenarios in which the autonomy stack may fail becomes increasingly important. Traditionally, such scenarios are generated for a small number of scenes, and only with respect to the planning module, which takes ground-truth actor states as input. This does not scale and cannot identify all possible autonomy failures, such as perception failures due to occlusion. In this paper, we propose AdvSim, an adversarial framework that generates safety-critical scenarios for any LiDAR-based autonomy system. Given an initial traffic scenario, AdvSim modifies the actors' trajectories in a physically plausible manner and updates the LiDAR sensor data to match the perturbed world. Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack. Our experiments show that our approach is general and can identify thousands of semantically meaningful safety-critical scenarios for a wide range of modern self-driving systems. Furthermore, we show that the robustness and safety of these systems can be further improved by training them with scenarios generated by AdvSim.
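At a high level, the adversarial generation described above can be framed as a black-box search over bounded trajectory perturbations, scored by how safety-critical the resulting ego behavior is. The following is a minimal toy sketch of that loop; the function names (`perturb_trajectory`, `safety_cost`, `adversarial_search`), the random waypoint offsets, and the minimum-distance cost are all illustrative assumptions, not AdvSim's actual method, which uses kinematically feasible perturbations and re-simulated LiDAR.

```python
import random

def perturb_trajectory(traj, max_shift=0.5):
    # Apply small, bounded offsets to each (x, y) waypoint -- a crude
    # stand-in for AdvSim's physically plausible trajectory perturbations.
    return [(x + random.uniform(-max_shift, max_shift),
             y + random.uniform(-max_shift, max_shift)) for x, y in traj]

def safety_cost(ego_traj, actor_traj):
    # Minimum ego-actor distance over time: lower means the scenario is
    # closer to a collision, i.e. more safety-critical.
    return min(((ex - ax) ** 2 + (ey - ay) ** 2) ** 0.5
               for (ex, ey), (ax, ay) in zip(ego_traj, actor_traj))

def adversarial_search(run_autonomy, actor_traj, iters=50, seed=0):
    # Black-box loop: sample a perturbed actor trajectory, run the full
    # autonomy stack on the (hypothetically) re-simulated sensor data,
    # and keep the perturbation that minimizes the safety cost.
    random.seed(seed)
    best_traj, best_cost = actor_traj, float("inf")
    for _ in range(iters):
        candidate = perturb_trajectory(actor_traj)
        ego_traj = run_autonomy(candidate)  # ego plan under the perturbed world
        cost = safety_cost(ego_traj, candidate)
        if cost < best_cost:
            best_traj, best_cost = candidate, cost
    return best_traj, best_cost
```

Because the stack is queried only as a black box (`run_autonomy`), this style of search does not need gradients through perception or planning, which is what lets the approach target the full autonomy stack rather than the planner alone.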