As self-driving systems improve, simulating scenarios where the autonomy stack is likely to fail becomes increasingly important. Traditionally, these scenarios are generated for a few scenes with respect to the planning module, which takes ground-truth actor states as input. This does not scale and cannot identify all possible autonomy failures, such as perception failures due to occlusion. In this paper, we propose AdvSim, an adversarial framework to generate safety-critical scenarios for any LiDAR-based autonomy system. Given an initial traffic scenario, AdvSim modifies the actors' trajectories in a physically plausible manner and updates the LiDAR sensor data to create realistic observations of the perturbed world. Importantly, by simulating directly from sensor data, we obtain adversarial scenarios that are safety-critical for the full autonomy stack. Our experiments show that our approach is general and can identify thousands of semantically meaningful safety-critical scenarios for a wide range of modern self-driving systems. Furthermore, we show that the robustness and safety of these autonomy systems can be further improved by training them with scenarios generated by AdvSim.
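The core loop described above can be sketched as a black-box search over bounded trajectory perturbations that maximizes a safety-critical cost of the autonomy system's output. The sketch below is a minimal, hypothetical illustration: the function names (`perturb_trajectory`, `adversarial_cost`, `advsim_search`), the plausibility check (a simple bounded waypoint deviation), and the random-search optimizer are all assumptions for illustration, not the paper's actual implementation, which additionally re-renders LiDAR observations and evaluates the full stack.

```python
import numpy as np

def perturb_trajectory(traj, delta, max_dev=1.0):
    """Apply a bounded waypoint perturbation (hypothetical stand-in for the
    paper's physically plausible perturbation model)."""
    delta = np.clip(delta, -max_dev, max_dev)
    return traj + delta

def adversarial_cost(ego_plan, actor_traj):
    """Safety-critical cost: negative minimum distance between the ego plan
    and the perturbed actor; higher cost means closer to collision."""
    dists = np.linalg.norm(ego_plan - actor_traj, axis=1)
    return -dists.min()

def advsim_search(plan_fn, actor_traj, n_iters=50, seed=0):
    """Black-box random search over perturbations (a simplified stand-in
    for the optimizer in the paper) maximizing the safety-critical cost.

    plan_fn stands in for the full autonomy stack: it maps the (perturbed)
    scenario to the ego vehicle's planned trajectory.
    """
    rng = np.random.default_rng(seed)
    best_delta = np.zeros_like(actor_traj)
    best_cost = adversarial_cost(plan_fn(actor_traj), actor_traj)
    for _ in range(n_iters):
        delta = rng.normal(scale=0.5, size=actor_traj.shape)
        candidate = perturb_trajectory(actor_traj, delta)
        cost = adversarial_cost(plan_fn(candidate), candidate)
        if cost > best_cost:
            best_cost, best_delta = cost, delta
    return perturb_trajectory(actor_traj, best_delta), best_cost
```

In the real system, `plan_fn` would involve simulating LiDAR sweeps of the perturbed world and running the perception, prediction, and planning modules end to end; here it is a cheap placeholder so the search structure is visible.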