Deep-learning-based trajectory prediction methods rely on large amounts of annotated future trajectories, but may not generalize well to a new scenario captured by another camera. Meanwhile, annotating trajectories to train a network for this new scenario is time-consuming and expensive; it is therefore desirable to adapt a model trained on annotated source-domain trajectories to the target domain. To tackle domain adaptation for trajectory prediction, we propose a Cross-domain Trajectory Prediction Network (CTP-Net), in which LSTMs encode the observed trajectories of both domains, and their features are aligned by a cross-domain feature discriminator. Further, to exploit the consistency between the observed and predicted trajectories in the target domain, a target-domain offset discriminator adversarially regularizes the predicted future trajectories to be consistent with the observed ones. Extensive experiments demonstrate both the relevance of the proposed domain-adaptation setting for trajectory prediction and the effectiveness of our method in that setting.
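The adversarial feature alignment described above can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the linear "encoder" stands in for the paper's LSTM encoder, and all shapes, names, and weights are illustrative assumptions. It shows only the core mechanism, namely a discriminator scoring whether a trajectory feature came from the source or target domain, whose loss the shared encoder is trained to maximize.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(traj, W):
    """Stand-in for the LSTM encoder: mean-pool a nonlinear
    projection of the observed (x, y) steps into one feature."""
    return np.tanh(traj @ W).mean(axis=0)

def discriminate(feat, w, b):
    """Cross-domain feature discriminator: sigmoid probability
    that a feature vector came from the source domain."""
    return 1.0 / (1.0 + np.exp(-(feat @ w + b)))

W = rng.normal(size=(2, 16))         # shared encoder weights (hypothetical)
w, b = rng.normal(size=16), 0.0      # discriminator parameters

src = rng.normal(size=(8, 2))        # 8 observed source-domain steps
tgt = rng.normal(size=(8, 2)) + 3.0  # target domain, e.g. another camera

p_src = discriminate(encode(src, W), w, b)
p_tgt = discriminate(encode(tgt, W), w, b)

# Discriminator loss: label source as 1, target as 0.
d_loss = -np.log(p_src) - np.log(1.0 - p_tgt)
# In adversarial training the encoder updates to *increase* d_loss,
# pushing source and target feature distributions to align; the
# target-domain offset discriminator plays the analogous game on
# predicted vs. observed trajectory offsets.
```

The same two-player structure applies to the offset discriminator: it scores whether a sequence of displacement offsets comes from an observed or a predicted target-domain trajectory, and the predictor is trained to fool it.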