Context plays a significant role in the generation of motion for dynamic agents in interactive environments. This work proposes a modular method that utilises a learned model of the environment for motion prediction. This modularity explicitly allows for unsupervised adaptation of trajectory prediction models to unseen environments and new tasks, relying only on unlabelled image data. We model both the spatial and dynamic aspects of a given environment alongside the per-agent motions. This results in more informed motion prediction and yields performance comparable to the state of the art. We demonstrate the model's prediction capability on a benchmark pedestrian prediction problem and a robot manipulation task, and show that the predictor can be transferred across these tasks in a completely unsupervised way. The proposed approach allows for robust and label-efficient forward modelling, and relaxes the need for full model re-training in new environments.