Making safe and human-like decisions is an essential capability of autonomous driving systems, and learning-based behavior planning is a promising pathway toward this objective. Unlike existing learning-based methods that directly output decisions, this work introduces a predictive behavior planning framework that learns to predict and evaluate from human driving data. The framework consists of three parts: a behavior generation module that produces a diverse set of candidate behaviors in the form of trajectory proposals, a conditional motion prediction network that predicts other agents' future trajectories conditioned on each proposal, and a scoring module, trained with maximum entropy inverse reinforcement learning (IRL), that evaluates the candidate plans. We conduct comprehensive experiments to validate the proposed framework on a large-scale real-world urban driving dataset. The results show that the conditional prediction model produces distinct and reasonable future trajectories given different trajectory proposals, and that the IRL-based scoring module selects plans close to human driving. The proposed framework outperforms baseline methods in terms of similarity to human driving trajectories. Additionally, we find that the conditional prediction model improves both prediction and planning performance compared to the non-conditional model, and that learning the scoring module is crucial for aligning its evaluations with human drivers.
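To make the scoring stage concrete, below is a minimal sketch of how an IRL-based scoring module of this kind could be trained: each candidate plan is reduced to a feature vector, a linear reward scores the candidates, and the weights are fit with the maximum entropy objective (softmax over candidates, maximizing the likelihood of the human-driven plan). The class and function names, the feature dimension, and the use of a purely linear reward are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class MaxEntIRLScorer(nn.Module):
    """Linear reward over per-plan features, trained with a maximum
    entropy IRL objective (hypothetical sketch, not the paper's code)."""

    def __init__(self, num_features: int):
        super().__init__()
        # One weight per feature; the score of a plan is w^T * features.
        self.weights = nn.Linear(num_features, 1, bias=False)

    def forward(self, plan_features: torch.Tensor) -> torch.Tensor:
        # plan_features: (batch, num_candidates, num_features)
        # returns scores: (batch, num_candidates)
        return self.weights(plan_features).squeeze(-1)


def maxent_irl_loss(scores: torch.Tensor, expert_idx: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the human (expert) plan under the
    softmax distribution induced by the candidate scores."""
    log_probs = torch.log_softmax(scores, dim=-1)
    return -log_probs.gather(-1, expert_idx.unsqueeze(-1)).mean()


# Illustrative usage: 32 scenes, 10 candidate plans each, 8 assumed features
# (e.g., progress, comfort, distance to predicted agents from the conditional
# prediction network). The expert index marks the plan closest to the human one.
if __name__ == "__main__":
    scorer = MaxEntIRLScorer(num_features=8)
    feats = torch.randn(32, 10, 8)
    expert = torch.zeros(32, dtype=torch.long)
    loss = maxent_irl_loss(scorer(feats), expert)
    loss.backward()
```

At inference time, the same scorer would simply rank the candidate trajectory proposals and the highest-scoring plan would be executed; the sketch only shows the training objective that aligns the ranking with human demonstrations.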