Accident prediction and timely warnings play a key role in improving road safety by reducing the risk of injury to road users and minimizing property damage. Advanced Driver Assistance Systems (ADAS) are designed to support human drivers and are especially useful when they can anticipate potential accidents before they happen. While many existing systems depend on a range of sensors such as LiDAR, radar, and GPS, relying solely on dash-cam video input is more challenging but also more cost-effective and easier to deploy. In this work, we incorporate richer spatio-temporal features and aggregate them through a recurrent network to improve upon state-of-the-art graph neural networks for predicting accidents from dash-cam videos. Experiments on three publicly available datasets show that our proposed STAGNet model achieves higher average precision and mean time-to-collision values than previous methods, both when cross-validated on a given dataset and when trained and tested on different datasets.
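
As a concrete illustration of the sentence above on aggregating spatio-temporal graph features with a recurrent network, the following minimal PyTorch sketch (our own illustration, not the paper's implementation) encodes per-frame object graphs with one round of message passing and aggregates the resulting frame embeddings with a GRU into frame-wise accident-risk scores. The class names, layer sizes, graph construction, and the choice of a GRU are all assumptions made for this example.

    # Minimal sketch (not the authors' code): per-frame object graphs are encoded
    # by simple mean-neighbour message passing, and the frame-level embeddings are
    # aggregated by a recurrent network (a GRU here) into per-frame accident risk.
    import torch
    import torch.nn as nn


    class GraphFrameEncoder(nn.Module):
        """One round of mean-neighbour message passing over detected objects."""

        def __init__(self, in_dim: int, hid_dim: int):
            super().__init__()
            self.proj = nn.Linear(in_dim, hid_dim)
            self.update = nn.Linear(2 * hid_dim, hid_dim)

        def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # feats: (N, in_dim) object features; adj: (N, N) adjacency for one frame.
            h = torch.relu(self.proj(feats))
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            msg = adj @ h / deg                       # mean over neighbours
            h = torch.relu(self.update(torch.cat([h, msg], dim=-1)))
            return h.mean(dim=0)                      # frame-level graph embedding


    class AccidentAnticipator(nn.Module):
        """GRU over per-frame graph embeddings -> per-frame accident probability."""

        def __init__(self, in_dim: int = 256, hid_dim: int = 128):
            super().__init__()
            self.encoder = GraphFrameEncoder(in_dim, hid_dim)
            self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
            self.head = nn.Linear(hid_dim, 1)

        def forward(self, frames):
            # frames: list of (object_features, adjacency) pairs, one per video frame.
            emb = torch.stack([self.encoder(f, a) for f, a in frames]).unsqueeze(0)
            out, _ = self.gru(emb)                    # temporal aggregation
            return torch.sigmoid(self.head(out)).squeeze(-1).squeeze(0)


    # Toy usage: 10 frames, each with 5 detected objects and a fully connected graph.
    if __name__ == "__main__":
        model = AccidentAnticipator()
        frames = [(torch.randn(5, 256), torch.ones(5, 5)) for _ in range(10)]
        print(model(frames).shape)                    # torch.Size([10]) frame-wise risk

In this sketch a video is represented as a list of (object features, adjacency) pairs, one per frame; a real pipeline would obtain these from an object detector and a proximity- or relation-based graph over the detections, as the abstract implies.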