In autonomous driving, using a variety of sensors to recognize preceding vehicles at middle and long range helps improve driving performance and enables the development of various functions. However, if only a LiDAR or only a camera is used in the recognition stage, the limitations of each sensor make it difficult to obtain the necessary data. In this paper, we propose a method that converts vision tracking data into bird's-eye-view (BEV) coordinates using the equation that projects LiDAR points onto the image, together with a method for fusing the LiDAR and vision tracking data. The effectiveness of the proposed method is demonstrated by its results in detecting the closest in-path vehicle (CIPV) in various situations. Moreover, in experiments following the Euro NCAP autonomous emergency braking (AEB) test protocol, the improved perception provided by the fusion result yields better AEB performance than using LiDAR alone. The performance of the proposed method was verified through actual vehicle tests in various scenarios. Consequently, the proposed sensor fusion method significantly improves the adaptive cruise control (ACC) function in autonomous maneuvering. We expect this improvement in perception performance to contribute to the overall stability of ACC.
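As a minimal sketch of the geometry involved: the forward mapping is the standard pinhole projection of a LiDAR point onto the image, and placing a vision track in BEV amounts to inverting it under a flat-ground assumption. The intrinsics `K`, extrinsics `R`, `t`, and camera height below are illustrative placeholders, not the paper's actual calibration.

```python
import numpy as np

# Illustrative calibration values (hypothetical); real values come from
# LiDAR-camera calibration, which the paper's projection equation assumes.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # camera intrinsics
R = np.eye(3)                             # rotation LiDAR frame -> camera frame
t = np.zeros(3)                           # translation LiDAR frame -> camera frame

def lidar_to_pixel(p_lidar):
    """Project a 3D LiDAR point onto the image plane: (u, v) ~ K (R p + t)."""
    p_cam = R @ p_lidar + t
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective division -> pixels

def pixel_to_bev(u, v, cam_height=1.5):
    """Back-project a pixel to BEV coordinates, assuming the point lies on a
    flat ground plane cam_height metres below the camera (y axis down).
    This is the inverse mapping used to place vision tracks in BEV."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    s = cam_height / ray[1]                         # scale so ray hits ground
    p_cam = s * ray
    return p_cam[0], p_cam[2]                       # lateral x, longitudinal z
```

A ground point projected with `lidar_to_pixel` and recovered with `pixel_to_bev` round-trips exactly under these assumptions, which is what makes the BEV placement of vision tracks consistent with the LiDAR coordinates during fusion.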