Vehicle trajectory prediction is nowadays a fundamental pillar of self-driving cars. Both the industry and research communities have acknowledged the need for such a pillar by running public benchmarks. While state-of-the-art methods are impressive, i.e., they produce virtually no off-road predictions on the benchmark, their generalization to cities outside of the benchmark is unknown. In this work, we show that those methods do not generalize to new scenes. We present a novel method that automatically generates realistic scenes causing state-of-the-art models to go off-road. We frame the problem through the lens of adversarial scene generation. We promote a simple yet effective generative model based on atomic scene generation functions along with physical constraints. Our experiments show that more than $60\%$ of the existing scenes from the current benchmarks can be modified in a way that makes prediction methods fail (predicting off-road). We further show that (i) the generated scenes are realistic since they do exist in the real world, and (ii) they can be used to improve the robustness of existing models by 30--40\%. Code is available at https://s-attack.github.io/.