Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving (AD) systems. However, these methods are vulnerable to adversarial attacks, which can lead to serious consequences such as collisions. In this work, we identify two key ingredients for defending trajectory prediction models against adversarial attacks: (1) designing effective adversarial training methods, and (2) adding domain-specific data augmentation to mitigate the performance degradation on clean data. Compared to a model trained only on clean data, our method improves performance on adversarial data by 46% at the cost of only a 3% degradation on clean data. Compared to existing robust methods, it improves performance by 21% on adversarial examples and 9% on clean data. We also evaluate our robust model with a planner to study its downstream impact, and show that it significantly reduces the rate of severe accidents (e.g., collisions and off-road driving).
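To make the two ingredients concrete, the sketch below illustrates the general idea of adversarial training for a trajectory predictor on a toy problem: an FGSM-style perturbation is applied to the observed history, and the model is trained on a mix of clean and perturbed batches so that robustness gains do not come entirely at the expense of clean accuracy. This is a minimal illustration of the standard technique, not the paper's actual attack, model, or augmentation; all settings (linear predictor, `eps`, learning rate) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a history of 4 (x, y) waypoints under constant velocity,
# with the next waypoint as the prediction target.
def make_batch(n):
    start = rng.normal(size=(n, 2))
    vel = rng.normal(size=(n, 2))
    hist = np.stack([start + t * vel for t in range(4)], axis=1)  # (n, 4, 2)
    target = start + 4 * vel                                      # (n, 2)
    return hist.reshape(n, 8), target

W = np.zeros((8, 2))  # toy linear predictor: prediction = X @ W

def loss_and_grads(X, y, W):
    err = X @ W - y
    loss = np.mean(err ** 2)
    grad_W = 2 * X.T @ err / len(X)
    grad_X = 2 * err @ W.T / len(X)  # input gradient, used by the attack
    return loss, grad_W, grad_X

eps, lr = 0.05, 0.01  # illustrative perturbation budget and step size
for step in range(2000):
    X, y = make_batch(64)
    # FGSM-style attack: perturb the observed history along the sign of
    # the input gradient to maximize prediction error.
    _, _, gX = loss_and_grads(X, y, W)
    X_adv = X + eps * np.sign(gX)
    # Train on clean and adversarial batches together, a simple form of
    # augmentation that limits clean-data degradation.
    for Xb in (X, X_adv):
        _, gW, _ = loss_and_grads(Xb, y, W)
        W -= lr * gW

X, y = make_batch(256)
clean_err = np.mean((X @ W - y) ** 2)
```

In practice the predictor is a DNN and the attack is run with a multi-step optimizer, but the structure is the same: an inner loop that crafts worst-case perturbations of the agent histories and an outer loop that updates the model on both clean and perturbed trajectories.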