Fully autonomous driving systems require fast detection and recognition of sensitive objects in the environment. In this context, intelligent vehicles should share their sensor data with computing platforms and/or other vehicles, to detect objects beyond their own sensors' fields of view. However, the resulting huge volumes of data to be exchanged can be challenging to handle for standard communication technologies. In this paper, we evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate. The final objective is to identify the optimal setup that minimizes the amount of data to be distributed over the channel, with negligible degradation in object detection accuracy. To this aim, we extend an already available object detection algorithm so that it can take as input camera images, LiDAR point clouds, or a combination of the two, and compare the accuracy of the different approaches on two realistic datasets. Our results show that, although sensor fusion always achieves more accurate detections, LiDAR-only inputs can obtain similar results for large objects while mitigating the burden on the channel.