Autonomous vehicles are expected to drive in complex scenarios with several independent, non-cooperating agents. Path planning for safe navigation in such environments cannot rely solely on perceiving the current position and motion of other agents; it must instead predict these variables sufficiently far into the future. In this paper we address the problem of multimodal trajectory prediction by exploiting a Memory Augmented Neural Network. Our method learns past and future trajectory embeddings using recurrent neural networks and exploits an associative external memory to store and retrieve such embeddings. Trajectory prediction is then performed by decoding in-memory future encodings conditioned on the observed past. We incorporate scene knowledge into the decoding state by learning a CNN on top of semantic scene maps. Memory growth is bounded by learning a writing controller based on the predictive capability of the stored embeddings. We show that our method natively performs multimodal trajectory prediction, obtaining state-of-the-art results on three datasets. Moreover, thanks to the non-parametric nature of the memory module, we show that, once trained, our system can continuously improve by ingesting novel patterns.
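To make the retrieve-and-decode idea concrete, below is a minimal PyTorch sketch of a memory-augmented trajectory predictor: GRU encoders produce past and future embeddings, an associative memory stores (past, future) pairs, and the top-k retrieved future embeddings are each decoded conditioned on the observed past, yielding multiple candidate trajectories. The module names, sizes, cosine-similarity lookup, and unconditional writes are illustrative assumptions; the scene CNN and the learned writing controller from the paper are omitted.

```python
# Hypothetical sketch of the memory-augmented encoder-decoder idea;
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryTrajectoryPredictor(nn.Module):
    def __init__(self, dim=48, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.past_enc = nn.GRU(2, dim, batch_first=True)    # encodes observed (x, y) steps
        self.future_enc = nn.GRU(2, dim, batch_first=True)  # encodes ground-truth futures (training only)
        self.decoder = nn.GRU(dim * 2, dim, batch_first=True)
        self.out = nn.Linear(dim, 2)                        # hidden state -> (x, y) point
        self.keys = []    # past embeddings stored in memory
        self.values = []  # matching future embeddings

    def encode_past(self, past):
        _, h = self.past_enc(past)                          # h: (1, B, dim)
        return h.squeeze(0)

    def write(self, past, future):
        # Store (past embedding, future embedding) pairs. In the paper a
        # learned controller gates writes by prediction error; here every
        # sample is written, which makes the memory grow without bound.
        with torch.no_grad():
            self.keys.append(self.encode_past(past))
            _, h = self.future_enc(future)
            self.values.append(h.squeeze(0))

    def forward(self, past, top_k=3):
        # Retrieve the top-k future embeddings whose past keys best match
        # the observed past, then decode one trajectory per match; this
        # retrieval of several futures is what makes the output multimodal.
        query = self.encode_past(past)                      # (B, dim)
        keys = torch.cat(self.keys, dim=0)                  # (M, dim)
        values = torch.cat(self.values, dim=0)              # (M, dim)
        sim = F.cosine_similarity(query.unsqueeze(1), keys.unsqueeze(0), dim=-1)
        idx = sim.topk(min(top_k, keys.size(0)), dim=-1).indices  # (B, k)
        modes = []
        for k in range(idx.size(1)):
            cond = torch.cat([query, values[idx[:, k]]], dim=-1)  # (B, 2*dim)
            steps = cond.unsqueeze(1).repeat(1, self.horizon, 1)
            h, _ = self.decoder(steps)
            modes.append(self.out(h))                       # (B, horizon, 2)
        return torch.stack(modes, dim=1)                    # (B, k, horizon, 2)

# Usage sketch: write a few observed pairs, then predict candidate futures.
model = MemoryTrajectoryPredictor()
model.write(torch.randn(4, 8, 2), torch.randn(4, 12, 2))
preds = model(torch.randn(1, 8, 2))  # (1, 3, 12, 2): three candidate futures
```

Because the memory is non-parametric, `write` can keep ingesting new (past, future) pairs after training, which is what lets such a system continuously improve without retraining the encoders or decoder.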