Radar is an indispensable part of the perception sensor suite for autonomous driving. It plays a gap-filling role, complementing the shortcomings of other sensors across diverse scenarios and weather conditions. In this paper, we propose a Deep Neural Network (DNN) based end-to-end object detection and heading estimation framework using raw radar data. To this end, we approach the problem in both a data-centric and a model-centric manner. We refine the publicly available CARRADA dataset and introduce bivariate norm annotations. In addition, the baseline model is improved with a transformer-inspired cross-attention fusion, and center-offset maps are added to reduce localisation error. Our proposed model improves the detection mean Average Precision (mAP) by 5% while reducing model complexity by almost 23%. For comprehensive scene understanding, we further extend our model to heading estimation. The improved ground truth and the proposed model are available on GitHub.