Multi-modal fusion is a fundamental task in autonomous driving perception and has attracted considerable research interest in recent years. Current multi-modal fusion methods focus mainly on camera and LiDAR data, while paying little attention to the kinematic information provided by the vehicle's bottom (chassis) sensors, such as acceleration, vehicle speed, and steering angle. This information is not affected by complex external scenes and is therefore more robust and reliable. In this paper, we survey the existing application fields of vehicle bottom information and the research progress of related methods, as well as multi-modal fusion methods based on bottom information. We also describe the available vehicle bottom information datasets in detail to help researchers get started quickly. In addition, we propose new directions for multi-modal fusion in autonomous driving tasks to promote further exploitation of vehicle bottom information.
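To make the idea of fusing bottom (chassis) signals with perception features concrete, the following is a minimal late-fusion sketch, not a method from this paper: it assumes pooled camera/LiDAR features are already available and simply concatenates them with an embedded kinematic vector (acceleration, speed, steering angle). All module names and dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class KinematicFusionHead(nn.Module):
    """Illustrative late-fusion head (hypothetical, not the paper's method):
    combines perception features with embedded vehicle-bottom signals."""

    def __init__(self, perception_dim=256, kinematic_dim=3, out_dim=128):
        super().__init__()
        # Small MLP to embed raw chassis signals, e.g. [accel, speed, steering_angle].
        self.kin_encoder = nn.Sequential(
            nn.Linear(kinematic_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 32),
        )
        # Fuse by concatenation followed by a linear projection.
        self.fuse = nn.Linear(perception_dim + 32, out_dim)

    def forward(self, perception_feat, kinematics):
        # perception_feat: (B, perception_dim) pooled camera/LiDAR features
        # kinematics:      (B, kinematic_dim) vehicle-bottom sensor readings
        kin_feat = self.kin_encoder(kinematics)
        fused = torch.cat([perception_feat, kin_feat], dim=-1)
        return self.fuse(fused)


# Usage example with random tensors standing in for real sensor data.
head = KinematicFusionHead()
perception_feat = torch.randn(4, 256)          # batch of fused image/point-cloud features
kinematics = torch.randn(4, 3)                 # batch of [accel, speed, steering_angle]
out = head(perception_feat, kinematics)        # (4, 128) fused representation
```

Because the kinematic vector is low-dimensional and largely independent of scene appearance, such late fusion adds little computational cost while injecting a signal that remains stable under adverse visual conditions.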