Our field has recently witnessed an arms race of neural network-based trajectory predictors. While these predictors are at the core of many applications such as autonomous navigation or pedestrian flow simulations, their adversarial robustness has not been carefully studied. In this paper, we introduce a socially-attended attack to assess the social understanding of prediction models in terms of collision avoidance. An attack is a small yet carefully-crafted perturbation designed to make a predictor fail. Technically, we define collision as a failure mode of the model's output, and propose hard- and soft-attention mechanisms to guide our attack. Thanks to our attack, we shed light on the limitations of current models in terms of their social understanding. We demonstrate the strengths of our method on recent trajectory prediction models. Finally, we show that our attack can be employed to increase the social understanding of state-of-the-art models. The code is available online: https://s-attack.github.io/
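To make the attack idea concrete, below is a minimal sketch of a gradient-based perturbation with a collision objective. All interfaces are assumptions for illustration: the predictor `model`, the tensor shapes, and the `eps`/`lr`/`steps` values are hypothetical, and the hard- and soft-attention mechanisms that the paper uses to guide the attack are omitted here.

```python
import torch

def collision_loss(pred):
    """Minimum pairwise distance between two predicted trajectories;
    minimizing it pushes the predicted agents toward a collision.
    pred: (num_agents, horizon, 2) predicted future positions."""
    dists = torch.norm(pred[0] - pred[1], dim=-1)  # (horizon,)
    return dists.min()

def attack(model, obs, eps=0.1, steps=50, lr=0.01):
    """Sketch of a projected-gradient attack on the observed trajectory.
    obs: (num_agents, obs_len, 2) observed positions (hypothetical shape)."""
    delta = torch.zeros_like(obs, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = model(obs + delta)    # assumed predictor interface
        loss = collision_loss(pred)  # drive predicted agents together
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():        # keep the perturbation small
            delta.clamp_(-eps, eps)
    return (obs + delta).detach()
```

The clamp step reflects the "small yet carefully-crafted" constraint: the perturbed input stays within an L-infinity ball of the original observation, so the scene remains plausible while the predicted output exhibits the collision failure mode.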