Trajectory prediction is essential for autonomous vehicles (AVs) to plan correct and safe driving behaviors. While many prior works aim to achieve higher prediction accuracy, few study the adversarial robustness of their methods. To bridge this gap, we propose to study the adversarial robustness of data-driven trajectory prediction systems. We devise an optimization-based adversarial attack framework that leverages a carefully designed differentiable dynamic model to generate realistic adversarial trajectories. Empirically, we benchmark the adversarial robustness of state-of-the-art prediction models and show that our attack increases the prediction error by more than 50% on general metrics and 37% on planning-aware metrics. We also show that our attack can cause an AV to drive off the road or collide with other vehicles in simulation. Finally, we demonstrate how to mitigate these attacks using an adversarial training scheme.
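The optimization-based attack described above can be illustrated with a minimal sketch. This is not the paper's implementation: a toy constant-velocity extrapolator stands in for a learned prediction model, a simple per-coordinate clipping step stands in for the carefully designed differentiable dynamic model, and the names `predict`, `project_feasible`, and `attack` are illustrative. The core idea is the same: perturb the observed history to maximize prediction error while keeping the perturbed trajectory dynamically plausible.

```python
import numpy as np

def predict(history, horizon=5):
    """Toy stand-in for a trajectory predictor: constant-velocity
    extrapolation of the last observed step (unit time step)."""
    v = history[-1] - history[-2]
    return history[-1] + np.outer(np.arange(1, horizon + 1), v)

def project_feasible(delta, max_step=0.2):
    """Simplified stand-in for the paper's dynamic-feasibility constraint:
    bound each coordinate of the perturbation so the perturbed history
    stays close to a physically plausible trajectory."""
    return np.clip(delta, -max_step, max_step)

def attack(history, gt_future, iters=50, lr=0.05):
    """Optimization-based attack sketch: ascend the prediction error
    w.r.t. a perturbation of the history, projecting onto the feasible
    set after every step. Gradients are numerical here; the paper's
    framework is end-to-end differentiable."""
    rng = np.random.default_rng(0)
    delta = 0.01 * rng.standard_normal(history.shape)  # break symmetry
    eps = 1e-4
    for _ in range(iters):
        grad = np.zeros_like(delta)
        base = np.sum((predict(history + delta) - gt_future) ** 2)
        for idx in np.ndindex(delta.shape):
            d2 = delta.copy()
            d2[idx] += eps
            err = np.sum((predict(history + d2) - gt_future) ** 2)
            grad[idx] = (err - base) / eps
        delta = project_feasible(delta + lr * grad)  # ascend error
    return history + delta

# Usage: a straight-line history; the clean prediction matches the
# ground-truth future exactly, and the attack drives them apart.
history = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])
gt_future = predict(history)
adv_history = attack(history, gt_future)
adv_error = np.sum((predict(adv_history) - gt_future) ** 2)
```

Swapping the numerical gradient for automatic differentiation through a real prediction model and replacing the clipping step with a differentiable kinematic model recovers the shape of the framework the abstract describes.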