Effective feature extraction is critical to a model's contextual understanding, particularly for applications in robotics and autonomous driving such as multimodal trajectory prediction. However, state-of-the-art generative methods face limitations in representing the scene context, leading to predictions of inadmissible futures. We alleviate these limitations through the use of self-attention, which enables better control over representing the agent's social context; we propose a local feature-extraction pipeline that produces more salient information downstream, with improved parameter efficiency. We show improvements on standard metrics (minADE, minFDE, DAO, DAC) over various baselines on the Argoverse dataset. We release our code at: https://github.com/Manojbhat09/Trajformer
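As a minimal illustration of the core idea (not the authors' exact architecture), the sketch below applies standard multi-head self-attention over per-agent feature vectors so that each agent's representation incorporates its social context; the class name, tensor shapes, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class SocialSelfAttention(nn.Module):
    """Hypothetical sketch: attend over agents in a scene so each agent's
    feature vector is refined by the features of its neighbours."""

    def __init__(self, feat_dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Standard PyTorch multi-head attention, batch-first layout.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)

    def forward(self, agent_feats: torch.Tensor) -> torch.Tensor:
        # agent_feats: (batch, num_agents, feat_dim) per-agent encodings.
        out, _ = self.attn(agent_feats, agent_feats, agent_feats)
        return out


# Example usage: 8 scenes, 10 agents each, 64-dim features per agent.
feats = torch.randn(8, 10, 64)
refined = SocialSelfAttention()(feats)  # same shape, socially-aware features
```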