In this work, a novel learning-based approach has been developed to generate driving paths by integrating LIDAR point clouds, GPS-IMU information, and Google driving directions. The system is based on a fully convolutional neural network that jointly learns to carry out perception and path generation from real-world driving sequences, and that is trained using automatically generated training examples. Several combinations of input data were tested in order to assess the performance gain provided by specific information modalities. The fully convolutional neural network trained using all the available sensors together with driving directions achieved the best MaxF score, 88.13%, when considering a region of interest of 60×60 meters. With a smaller region of interest, the agreement between predicted paths and the ground truth increased to 92.60%. The positive results obtained in this work indicate that the proposed system may help fill the gap between low-level scene parsing and behavior-reflex approaches by generating outputs that are close to vehicle control and, at the same time, human-interpretable.