Autonomous driving is challenging in adverse road and weather conditions, in which there might be no lane lines, the road might be covered in snow, and visibility might be poor. We extend previous work on end-to-end learning for autonomous steering to operate in these adverse real-life conditions with multimodal data. We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the steering wheel angle from front-facing color camera images and lidar range and reflectance data. We compared the performance of CNN models trained on the different modalities, and our results show that the lidar modality improves the performance of multimodal sensor-fusion models. On-road tests with the different models support this observation.
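To make the fusion setup concrete, the following is a minimal PyTorch sketch of a multimodal CNN that regresses a steering wheel angle from a camera image and lidar data. It assumes the lidar range and reflectance measurements are projected into a 2-channel image aligned with the camera view and fused at feature level by concatenation; the branch layout, layer sizes, input resolution, and fusion point are illustrative assumptions, not the exact architecture evaluated in this work.

```python
# Sketch of a mid-level sensor-fusion CNN for steering-angle regression.
# Assumptions: lidar arrives as a 2-channel range/reflectance image and
# modality features are fused by concatenation before the regression head.
import torch
import torch.nn as nn

def conv_branch(in_channels: int) -> nn.Sequential:
    """Small convolutional feature extractor for one modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 24, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
        nn.Conv2d(48, 64, kernel_size=3, stride=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d((2, 4)), nn.Flatten(),  # -> 64*2*4 = 512 features
    )

class FusionSteeringNet(nn.Module):
    """Two modality branches, concatenated into one regression head."""
    def __init__(self):
        super().__init__()
        self.camera = conv_branch(in_channels=3)   # RGB camera frame
        self.lidar = conv_branch(in_channels=2)    # lidar range + reflectance
        self.head = nn.Sequential(
            nn.Linear(2 * 512, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),                      # predicted steering wheel angle
        )

    def forward(self, rgb: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.camera(rgb), self.lidar(lidar)], dim=1)
        return self.head(fused)

model = FusionSteeringNet()
rgb = torch.randn(1, 3, 66, 200)    # batch of camera frames (illustrative size)
lidar = torch.randn(1, 2, 66, 200)  # projected lidar range/reflectance image
angle = model(rgb, lidar)           # steering-angle prediction, shape (1, 1)
```

Training such a model end to end typically minimizes a regression loss (e.g., mean squared error) between the predicted angle and the angle recorded from the human driver; camera-only or lidar-only variants can be compared by dropping one branch.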