Human-vehicle cooperative driving has become a critical technology in autonomous driving, as it reduces the workload of human drivers. However, complex and uncertain road environments pose great challenges to the visual perception of cooperative systems, and the perception characteristics of autonomous driving differ substantially from those of manual driving. To enhance the visual perception capability of human-vehicle cooperative driving, this paper proposes a cooperative visual perception model. A total of 506 images of complex road and traffic scenarios were collected as the data source, and the object detection algorithm for autonomous vehicles was improved; the mean perception accuracy for traffic elements reached 75.52%. Using an image fusion method, the gaze points of human drivers were fused with the vehicles' monitoring screens. The results reveal that cooperative visual perception can identify the riskiest zone and predict the trajectories of conflict objects more precisely. The findings can be applied to improving visual perception algorithms and providing accurate data for planning and control.