Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error. Our experiments on three models and three datasets show that our attack increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV while following the adversarial trajectory, the AV may make inaccurate predictions and even unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing. The implementation is open source at https://github.com/zqzqz/AdvTrajectoryPrediction.
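To make the core idea concrete, below is a minimal sketch of a gradient-based perturbation on an observed vehicle trajectory that a model takes as input, together with a simple moving-average smoother in the spirit of the trajectory-smoothing mitigation. The model interface, loss choice (average displacement error), step size, and perturbation bound are illustrative assumptions, not the paper's exact formulation, which additionally constrains the perturbed trajectory to remain physically plausible.

```python
import torch

def perturb_history(model, history, ground_truth_future,
                    bound=1.0, step_size=0.1, n_iters=20):
    """PGD-style sketch: perturb an observed trajectory (T_obs x 2 positions, meters)
    so that the model's predicted future deviates from the ground truth."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(n_iters):
        pred_future = model(history + delta)          # (T_pred, 2) predicted positions
        # Average displacement error between prediction and ground truth;
        # the attack ascends this loss to maximize prediction error.
        ade = torch.norm(pred_future - ground_truth_future, dim=-1).mean()
        ade.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()    # signed gradient ascent step
            delta.clamp_(-bound, bound)               # keep the deviation small
        delta.grad.zero_()
    return (history + delta).detach()

def smooth_history(history, window=3):
    """Moving-average smoothing of an observed trajectory (T_obs x 2),
    one simple form of the trajectory-smoothing mitigation."""
    kernel = torch.ones(1, 1, window) / window
    xy = history.t().unsqueeze(1)                     # (2, 1, T_obs), x and y as separate signals
    smoothed = torch.nn.functional.conv1d(xy, kernel, padding=window // 2)
    return smoothed.squeeze(1).t()[: history.shape[0]]
```

In this sketch the adversary only needs gradient access to the prediction model; smoothing the observed history before prediction attenuates the small, high-frequency deviations such a perturbation introduces, at the cost of slightly blurring benign trajectories.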