Deep learning has been used to demonstrate end-to-end neural network learning for autonomous vehicle control from raw sensory input. While LiDAR sensors provide reliable, accurate information, existing end-to-end driving solutions are mainly based on cameras, since processing 3D data requires a large memory footprint and high computational cost. On the other hand, increasing the robustness of these systems is also critical; however, even estimating the model's uncertainty is challenging due to the high cost of sampling-based methods. In this paper, we present an efficient and robust LiDAR-based end-to-end navigation framework. We first introduce Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design. We then propose Hybrid Evidential Fusion, which directly estimates the uncertainty of the prediction from only a single forward pass and then fuses the control predictions intelligently. We evaluate our system on a full-scale vehicle and demonstrate lane-stable driving as well as navigation capabilities. In the presence of out-of-distribution events (e.g., sensor failures), our system significantly improves robustness and reduces the number of takeovers in the real world.
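To make the fusion idea concrete: a deep evidential regression head outputs the parameters of a Normal-Inverse-Gamma distribution, from which epistemic uncertainty follows in closed form from a single forward pass; predictions can then be fused with inverse-uncertainty weights. The sketch below is illustrative only; the parameter names and the specific weighting rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def evidential_uncertainty(nu, alpha, beta):
    # Epistemic uncertainty of a Normal-Inverse-Gamma (evidential) head:
    # Var[mu] = beta / (nu * (alpha - 1)), available without sampling.
    # Requires alpha > 1 for the variance to be finite.
    return beta / (nu * (alpha - 1))

def fuse_predictions(preds, uncertainties):
    # Illustrative fusion rule: weight each control prediction by the
    # inverse of its estimated uncertainty, then normalize the weights.
    w = 1.0 / np.asarray(uncertainties, dtype=float)
    w = w / w.sum()
    return float(np.dot(w, np.asarray(preds, dtype=float)))

# Example: two steering predictions; the less uncertain one dominates.
u1 = evidential_uncertainty(nu=2.0, alpha=2.0, beta=4.0)   # -> 2.0
u2 = evidential_uncertainty(nu=4.0, alpha=3.0, beta=4.0)   # -> 0.5
fused = fuse_predictions([0.1, 0.5], [u1, u2])
```

Because the uncertainty comes from a single deterministic pass, this avoids the repeated forward passes of sampling-based methods such as Monte Carlo dropout or deep ensembles.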