Predicting the future trajectory of a moving agent can be easy when the past trajectory continues smoothly but is challenging when complex interactions with other agents are involved. Recent deep learning approaches for trajectory prediction show promising performance and partially attribute this to successful reasoning about agent-agent interactions. However, it remains unclear which features such black-box models actually learn to use for making predictions. This paper proposes a procedure that quantifies the contributions of different cues to model performance based on a variant of Shapley values. Applying this procedure to state-of-the-art trajectory prediction methods on standard benchmark datasets shows that they are, in fact, unable to reason about interactions. Instead, the past trajectory of the target is the only feature used for predicting its future. For a task with richer social interaction patterns, on the other hand, the tested models do pick up such interactions to a certain extent, as quantified by our feature attribution method. We discuss the limits of the proposed method and its links to causality.
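The abstract describes the attribution procedure only at a high level. As a rough illustration of the underlying idea, the sketch below computes exact Shapley values over a handful of input cues, where the value of a coalition is the model's performance when only those cues are available and the others are ablated. The cue names, the `performance` callback, and the ablation strategy are hypothetical placeholders, not the paper's specific variant.

```python
import itertools
import math
from typing import Callable, Dict, FrozenSet, Sequence


def shapley_attributions(
    features: Sequence[str],
    performance: Callable[[FrozenSet[str]], float],
) -> Dict[str, float]:
    """Exact Shapley values over a small set of input cues.

    `performance(S)` should return the model's score (e.g., negative
    average displacement error) when only the cues in S are available
    and the remaining cues are replaced by uninformative inputs.
    This is a generic sketch of Shapley-value feature attribution.
    """
    n = len(features)
    values = {f: 0.0 for f in features}
    for perm in itertools.permutations(features):
        included: set = set()
        prev = performance(frozenset(included))
        for f in perm:
            included.add(f)
            curr = performance(frozenset(included))
            values[f] += curr - prev  # marginal contribution of cue f
            prev = curr
    # Average the marginal contributions over all orderings.
    return {f: v / math.factorial(n) for f, v in values.items()}


# Hypothetical cues for trajectory prediction: the target's own past
# trajectory, neighbouring agents, and scene context. `evaluate` would
# re-run the predictor with the ablated inputs and return a score.
# cues = ["past_trajectory", "neighbours", "scene"]
# attributions = shapley_attributions(cues, evaluate)
```

With only a few cues, the sum over all orderings is cheap to compute exactly; with many cues one would instead sample permutations to approximate the attributions.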