To plan safe maneuvers and act with foresight, autonomous vehicles must be capable of accurately predicting the uncertain future. In the context of autonomous driving, deep neural networks have been successfully applied to learning predictive models of human driving behavior from data. However, the predictions suffer from cascading errors, resulting in large inaccuracies over long time horizons. Furthermore, the learned models are black boxes, and thus it is often unclear how they arrive at their predictions. In contrast, rule-based models, which are informed by human experts, maintain long-term coherence in their predictions and are human-interpretable. However, such models often lack the expressiveness needed to capture complex real-world dynamics. In this work, we begin to close this gap by embedding the Intelligent Driver Model, a popular hand-crafted driver model, into deep neural networks. Our model's transparency can offer considerable advantages, e.g., in debugging the model and more easily interpreting its predictions. We evaluate our approach on a simulated merging scenario, showing that it yields a robust model that is end-to-end trainable and provides greater transparency at no cost to the model's predictive accuracy.
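To make the embedding concrete, below is a minimal sketch of the Intelligent Driver Model (IDM) wrapped as a differentiable PyTorch layer. The IDM acceleration equations themselves are standard; treating the IDM parameters (desired speed, time headway, minimum gap, maximum acceleration, comfortable deceleration) as learnable tensors so that gradients flow through them is an illustrative assumption about how such an end-to-end trainable embedding could look, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class DifferentiableIDM(nn.Module):
    """IDM acceleration as a differentiable function of ego speed v,
    gap to the lead vehicle s, and approach rate dv = v_ego - v_lead."""

    def __init__(self):
        super().__init__()
        # Learnable IDM parameters, initialized to typical highway values.
        # Making these nn.Parameters is our assumption for illustration.
        self.v0 = nn.Parameter(torch.tensor(30.0))    # desired speed [m/s]
        self.T = nn.Parameter(torch.tensor(1.5))      # desired time headway [s]
        self.s0 = nn.Parameter(torch.tensor(2.0))     # minimum gap [m]
        self.a_max = nn.Parameter(torch.tensor(1.0))  # max acceleration [m/s^2]
        self.b = nn.Parameter(torch.tensor(1.5))      # comfortable decel. [m/s^2]
        self.delta = 4.0                              # acceleration exponent (fixed)

    def forward(self, v, s, dv):
        # Desired dynamic gap s*(v, dv).
        s_star = self.s0 + v * self.T + v * dv / (2.0 * torch.sqrt(self.a_max * self.b))
        # IDM acceleration: free-road term minus interaction term.
        return self.a_max * (1.0 - (v / self.v0) ** self.delta - (s_star / s) ** 2)


# Usage: one Euler step of a predicted trajectory, differentiable end to end.
idm = DifferentiableIDM()
v = torch.tensor([25.0])   # ego speed [m/s]
s = torch.tensor([40.0])   # gap to lead vehicle [m]
dv = torch.tensor([2.0])   # approach rate [m/s]
a = idm(v, s, dv)
v_next = v + 0.1 * a       # dt = 0.1 s
```

Because every parameter carries a physical meaning (e.g., time headway in seconds), a learned value that drifts outside a plausible range is immediately visible, which is the kind of transparency and debuggability advantage the abstract refers to.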