Generating safety-critical scenarios, which are crucial yet difficult to collect, provides an effective way to evaluate the robustness of autonomous driving systems. However, the diversity of generated scenarios and the efficiency of generation methods are heavily restricted by the rarity and structure of safety-critical scenarios. As a result, existing generative models that only estimate distributions from observational data cannot adequately solve this problem. In this paper, we integrate causality as a prior into scenario generation and propose a flow-based generative framework, Causal Autoregressive Flow (CausalAF). CausalAF encourages the generative model to uncover and follow the causal relationships among generated objects via novel causal masking operations, rather than searching for samples solely in observational data. By learning the cause-and-effect mechanism of how a generated scenario leads to risky situations, instead of merely learning correlations from data, CausalAF significantly improves learning efficiency. Extensive experiments on three heterogeneous traffic scenarios show that CausalAF requires considerably fewer optimization resources to generate safety-critical scenarios effectively. We also show that using the generated scenarios as additional training samples empirically improves the robustness of autonomous driving algorithms.
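As a minimal illustrative sketch (not the paper's implementation), the core idea behind causal masking in an autoregressive model can be expressed as combining a strictly lower-triangular ordering mask with a causal adjacency matrix, so that each generated object conditions only on its causal parents. The function names `causal_mask` and `masked_autoregressive_step` and the tensor layout below are assumptions made for illustration.

```python
import torch


def causal_mask(causal_graph: torch.Tensor) -> torch.Tensor:
    """Build a binary mask allowing node i to condition only on causal parents
    that also precede it in the autoregressive ordering.

    causal_graph: (N, N) adjacency matrix with causal_graph[i, j] = 1 if node j
    is a causal parent of node i.
    """
    n = causal_graph.shape[0]
    # Strictly lower-triangular mask enforces the autoregressive ordering.
    order_mask = torch.tril(torch.ones(n, n), diagonal=-1)
    # Keep only edges that are both earlier in the ordering and causal parents.
    return order_mask * causal_graph


def masked_autoregressive_step(features: torch.Tensor,
                               weights: torch.Tensor,
                               mask: torch.Tensor) -> torch.Tensor:
    """One masked linear step: each object's conditioning context is restricted
    to its masked (causal) predecessors, in the spirit of masked autoregressive
    flows. features: (batch, N), weights/mask: (N, N)."""
    return features @ (weights * mask).T
```

Under these assumptions, zeroing the non-parent entries of the weight matrix is what restricts the generative model to the cause-and-effect structure rather than to arbitrary correlations among objects.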