Trajectory prediction is a fundamental challenge for autonomous vehicles. Early works mainly focused on designing complicated architectures for deep-learning-based prediction models in normal-illumination environments, which fail to cope with low-light conditions. This paper proposes a novel approach for trajectory prediction in low-illumination scenarios by leveraging multi-stream information fusion, which flexibly integrates image, optical flow, and object trajectory information. The image channel employs a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) networks to extract temporal information from the camera. The optical flow channel captures the pattern of relative motion between adjacent camera frames and is modelled by a Spatial-Temporal Graph Convolutional Network (ST-GCN). The trajectory channel is used to recognize high-level interactions between vehicles. Finally, information from all three channels is effectively fused in the prediction module to generate the future trajectories of surrounding vehicles in low-illumination conditions. The proposed multi-channel graph convolutional approach is validated on HEV-I and the newly generated Dark-HEV-I, two egocentric vision datasets that primarily focus on urban intersection scenarios. The results demonstrate that our method outperforms the baselines in both standard and low-illumination scenarios. Moreover, our approach is generic and applicable to scenarios with different types of perception data. The source code of the proposed approach is available at https://github.com/TommyGong08/MSIF.
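To make the three-channel fusion idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: each hypothetical encoder (a stand-in for the CNN+LSTM image channel, the ST-GCN optical flow channel, and the interaction-aware trajectory channel) reduces its input stream to a fixed-size embedding, and a single linear fusion layer maps the concatenated embeddings to future (x, y) offsets. All names, shapes, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_channel(frames):
    # Hypothetical stand-in for the CNN+LSTM image encoder:
    # flatten each frame and average over time to a fixed embedding.
    return frames.reshape(frames.shape[0], -1).mean(axis=0)

def flow_channel(flow, adj):
    # Stand-in for the ST-GCN over optical flow: one graph-convolution
    # step (adjacency-weighted aggregation), then spatio-temporal pooling.
    # flow: (T, N, F) per-vehicle flow features; adj: (N, N) normalized adjacency.
    agg = np.einsum('ij,tjf->tif', adj, flow)  # spatial aggregation
    return agg.mean(axis=(0, 1))               # pool over time and nodes

def trajectory_channel(past_xy):
    # Stand-in for the trajectory channel: summarize last positions and
    # mean velocities of the surrounding vehicles.
    vel = np.diff(past_xy, axis=0)
    return np.concatenate([past_xy[-1].ravel(), vel.mean(axis=0).ravel()])

def fuse_and_predict(embeddings, horizon, out_dim, W):
    # Fusion module: concatenate the three channel embeddings and map
    # them to future (x, y) offsets with a single linear layer W.
    z = np.concatenate(embeddings)
    return (W @ z).reshape(horizon, out_dim)

T, N, horizon = 8, 3, 5                  # past steps, vehicles, future steps
frames  = rng.random((T, 16, 16))        # toy grayscale camera frames
flow    = rng.random((T, N, 2))          # toy per-vehicle flow features
past_xy = rng.random((T, N, 2))          # toy past trajectories

adj = np.ones((N, N)) / N                # uniform normalized adjacency
e_img  = image_channel(frames)
e_flow = flow_channel(flow, adj)
e_traj = trajectory_channel(past_xy)

z_dim = e_img.size + e_flow.size + e_traj.size
W = rng.standard_normal((horizon * 2, z_dim)) * 0.01
future = fuse_and_predict([e_img, e_flow, e_traj], horizon, 2, W)
print(future.shape)  # (5, 2): five predicted (x, y) steps
```

In the paper, each stand-in would be replaced by the corresponding learned module, but the fusion pattern (per-stream embedding followed by a joint prediction head) is the same.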