Driver process models play a central role in the testing, verification, and development of automated and autonomous vehicle technologies. Prior models developed from control theory and physics-based rules are limited in automated vehicle applications due to their restricted behavioral repertoire. Data-driven machine learning models are more capable than rule-based models but are limited by the need for large training datasets and their lack of interpretability, i.e., an understandable link between input data and output behaviors. We propose a novel car-following modeling approach using active inference, which has behavioral flexibility comparable to data-driven models while maintaining interpretability. We assessed the proposed model, the Active Inference Driving Agent (AIDA), through a benchmark analysis against the rule-based Intelligent Driver Model and two neural network Behavior Cloning models. The models were trained and tested on a real-world driving dataset using a consistent process. The testing results showed that the AIDA predicted driving controls significantly better than the rule-based Intelligent Driver Model and had accuracy similar to the data-driven neural network models in three out of four evaluations. Subsequent interpretability analyses illustrated that the AIDA's learned distributions were consistent with driver behavior theory and that visualizations of the distributions could be used to directly comprehend the model's decision-making process and correct model errors attributable to limited training data. The results indicate that the AIDA is a promising alternative to black-box data-driven models and suggest a need for further research focused on modeling driving style and on model training with more diverse datasets.