With the rapid development of intelligent vehicles and Advanced Driver-Assistance Systems (ADAS), a new trend is that mixed levels of human driver engagement will be involved in the transportation system. Necessary visual guidance for drivers is therefore vitally important in this situation to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel vision-cloud data fusion methodology that integrates camera images with Digital Twin information from the cloud to help intelligent vehicles make better decisions. The target vehicle's bounding box is drawn and matched using an object detector running on the ego vehicle and position information received from the cloud. The best matching result, 79.2% accuracy at an intersection-over-union (IoU) threshold of 0.7, is obtained when depth images serve as an additional feature source. A case study on lane change prediction is conducted to demonstrate the effectiveness of the proposed data fusion methodology. In the case study, a multi-layer perceptron algorithm is proposed with modified lane change prediction approaches. Human-in-the-loop simulation results obtained from the Unity game engine reveal that the proposed model significantly improves highway driving performance in terms of safety, comfort, and environmental sustainability.
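The matching criterion above (a detection counts as a correct match when its overlap with the cloud-derived reference box exceeds 0.7 IoU) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the `(x1, y1, x2, y2)` box format are our own assumptions.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_match(detected_box, reference_box, threshold=0.7):
    """Accept a detector box as matching the cloud reference if IoU > threshold."""
    return iou(detected_box, reference_box) > threshold
```

Under this convention, a detector box shifted by half its width relative to the reference yields an IoU of 1/3 and is rejected at the 0.7 threshold.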