Radar, the only sensor that can provide reliable perception in all weather conditions at an affordable cost, has been widely accepted as a key supplement to camera and LiDAR in modern advanced driver assistance systems (ADAS) and autonomous driving systems. Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather, such as fog. However, these methods still suffer from inaccurate bounding box estimation. This paper proposes a bird's-eye view (BEV) fusion learning method for an anchor-box-free object detection system, which uses features derived from the radar range-azimuth heatmap and the LiDAR point cloud to estimate possible objects. Different label assignment strategies have been designed to enforce consistency between the classification of foreground or background anchor points and the corresponding bounding box regressions. Furthermore, the performance of the proposed object detector is further enhanced by a novel interactive transformer module. We demonstrate the superior performance of the proposed method on the recently published Oxford Radar RobotCar (ORR) dataset, showing that it outperforms other state-of-the-art methods by a large margin.
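The BEV fusion described above can be sketched in a minimal form: the radar range-azimuth heatmap is resampled from polar coordinates onto a Cartesian BEV grid, the LiDAR point cloud is rasterized onto the same grid, and the two are stacked as input channels for a detection backbone. The grid size, spatial extent, and channel-concatenation strategy below are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def lidar_to_bev(points, grid=64, extent=32.0):
    """Rasterize LiDAR (x, y) points into a binary BEV occupancy grid."""
    bev = np.zeros((grid, grid), dtype=np.float32)
    # Map metric coordinates in [-extent, extent] to integer cell indices.
    ij = np.floor((points[:, :2] + extent) / (2 * extent) * grid).astype(int)
    keep = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
    bev[ij[keep, 0], ij[keep, 1]] = 1.0
    return bev

def radar_polar_to_bev(heatmap, grid=64, extent=32.0, max_range=32.0):
    """Resample a range-azimuth heatmap onto the same Cartesian BEV grid
    by nearest-neighbor lookup (range bin, azimuth bin) per cell."""
    n_range, n_az = heatmap.shape
    xs = (np.arange(grid) + 0.5) / grid * 2 * extent - extent
    xx, yy = np.meshgrid(xs, xs, indexing="ij")
    r = np.sqrt(xx ** 2 + yy ** 2)
    az = np.mod(np.arctan2(yy, xx), 2 * np.pi)
    ri = np.clip((r / max_range * n_range).astype(int), 0, n_range - 1)
    ai = np.clip((az / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    bev = heatmap[ri, ai]
    bev[r > max_range] = 0.0  # zero out cells beyond the radar's range
    return bev

# Fuse by channel concatenation; a CNN backbone would consume this tensor
# and an anchor-free head would classify each BEV cell (anchor point) as
# foreground/background and regress a box for foreground cells.
lidar_pts = np.random.uniform(-30, 30, size=(500, 3))
radar_hm = np.random.rand(128, 400).astype(np.float32)
fused = np.stack([lidar_to_bev(lidar_pts), radar_polar_to_bev(radar_hm)], axis=0)
print(fused.shape)  # (2, 64, 64): one LiDAR channel, one radar channel
```

The polar-to-Cartesian resampling is what aligns the two modalities spatially so that per-cell fusion is meaningful; in practice the concatenated grid would be fed to a learned feature extractor rather than used directly.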