In this paper, we present a novel end-to-end deep neural network model for autonomous driving that takes a monocular image sequence as input and directly generates the steering control angle. First, we formulate the end-to-end driving problem as a local path planning process. Inspired by the environmental representation used in classical planning algorithms (i.e., the beam curvature method), pixel-wise orientations are fed into the network to learn direction-aware features. Next, to handle the imbalanced distribution of steering values in training datasets, we propose an improved cost-sensitive loss function, named SteeringLoss2. In addition, we present a new end-to-end driving dataset, which provides corresponding LiDAR and image sequences, as well as standard driving behaviors. Our dataset covers multiple driving scenarios, such as urban, country, and off-road. Extensive experiments are conducted on both the publicly available LiVi-Set and our own dataset, and the results show that the model using our proposed methods can predict steering angles accurately.
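To illustrate the cost-sensitive idea, the following is a minimal sketch (not the paper's SteeringLoss2, whose exact form is defined in the paper): per-sample weights grow with the magnitude of the ground-truth steering angle, so that rare large-angle samples are not drowned out by the dominant near-zero samples. The weighting scheme and the `alpha` parameter are assumptions chosen for illustration only.

```python
import torch

def cost_sensitive_l2_loss(pred, target, alpha=2.0):
    """Weighted L2 loss: the per-sample weight increases with the
    absolute ground-truth steering angle, so rare sharp-turn samples
    contribute more than the abundant near-zero samples.
    (Hypothetical weighting; not the paper's SteeringLoss2.)"""
    weights = 1.0 + alpha * target.abs()
    return (weights * (pred - target) ** 2).mean()

# Toy usage: most targets are near zero, a few correspond to sharp turns.
pred   = torch.tensor([0.01, -0.02, 0.30, -0.45])
target = torch.tensor([0.00, -0.01, 0.40, -0.50])
print(cost_sensitive_l2_loss(pred, target))
```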