Accurate and robust trajectory prediction of neighboring agents is critical for autonomous vehicles navigating complex scenes. Most methods proposed in recent years are deep learning-based owing to their strength in encoding complex interactions. However, these methods often generate implausible predictions because they rely heavily on past observations and cannot effectively capture transient and contingent interactions from sparse samples. In this paper, we propose a hierarchical hybrid framework of deep learning (DL) and reinforcement learning (RL) for multi-agent trajectory prediction, to cope with the challenge of predicting motions shaped by multi-scale interactions. In the DL stage, the traffic scene is divided into multiple intermediate-scale heterogeneous graphs, based on which Transformer-style GNNs are adopted to encode heterogeneous interactions at the intermediate and global levels. In the RL stage, we divide the traffic scene into local sub-scenes using the key future points predicted in the DL stage. To emulate the motion planning procedure and thereby produce trajectory predictions, a Transformer-based Proximal Policy Optimization (PPO) method incorporating a vehicle kinematics model is devised to plan motions under the dominant influence of microscopic interactions. A multi-objective reward is designed to balance agent-centric accuracy against scene-wise compatibility. Experimental results show that our proposal matches the state of the art on the Argoverse forecasting benchmark. The visualized results further reveal that the hierarchical learning framework captures the multi-scale interactions and improves the feasibility and compliance of the predicted trajectories.
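The abstract names two concrete components of the RL stage without specifying them: a vehicle kinematics model that constrains the PPO planner, and a multi-objective reward balancing agent-centric accuracy with scene-wise compatibility. A minimal sketch of what these might look like, assuming a standard kinematic bicycle model and an illustrative weighted-sum reward (the function names, weights, and safety distance are hypothetical, not taken from the paper):

```python
import math

def bicycle_step(state, accel, steer, dt=0.1, lf=1.4, lr=1.4):
    """Advance a kinematic bicycle model one time step.

    state = (x, y, heading theta, speed v); accel and steering
    angle are the policy's actions. lf/lr are illustrative
    distances from the center of gravity to the front/rear axle.
    """
    x, y, theta, v = state
    # slip angle at the center of gravity
    beta = math.atan(lr / (lf + lr) * math.tan(steer))
    x += v * math.cos(theta + beta) * dt
    y += v * math.sin(theta + beta) * dt
    theta += (v / lr) * math.sin(beta) * dt
    v += accel * dt
    return (x, y, theta, v)

def step_reward(pred_pos, gt_pos, neighbor_positions,
                w_acc=1.0, w_compat=0.5, safe_dist=2.0):
    """Weighted-sum multi-objective reward (illustrative weights).

    Agent-centric accuracy: negative displacement to the ground
    truth. Scene-wise compatibility: penalty for predicted
    positions closer than safe_dist to any neighboring agent.
    """
    acc = -math.hypot(pred_pos[0] - gt_pos[0], pred_pos[1] - gt_pos[1])
    penalty = 0.0
    for nx, ny in neighbor_positions:
        d = math.hypot(pred_pos[0] - nx, pred_pos[1] - ny)
        if d < safe_dist:
            penalty -= (safe_dist - d)
    return w_acc * acc + w_compat * penalty
```

Rolling the policy's actions through `bicycle_step` guarantees that every predicted trajectory is kinematically feasible, which is one way the planning-style formulation can improve plausibility over purely regressive decoders.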