Understanding the scene around the ego-vehicle is key to assisted and autonomous driving. Nowadays, this is mostly accomplished using cameras and laser scanners, despite their reduced performance in adverse weather conditions. Automotive radars are low-cost active sensors that measure properties of surrounding objects, including their relative speed, and have the key advantage of not being impacted by rain, snow or fog. However, they are seldom used for scene understanding due to the size and complexity of radar raw data and the lack of annotated datasets. Fortunately, recent open-sourced datasets have opened up research on classification, object detection and semantic segmentation with raw radar signals using end-to-end trainable models. In this work, we propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically. Experiments conducted on the recent CARRADA dataset demonstrate that our best model outperforms alternative models, derived either from the semantic segmentation of natural images or from radar scene understanding, while requiring significantly fewer parameters. Both our code and trained models will be released.
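To make the multi-view idea concrete, here is a minimal sketch of how 2D "views" can be obtained from a range-angle-Doppler (RAD) tensor by aggregating along one axis at a time. The function name `radar_views`, the axis order (range, angle, Doppler), the bin sizes, and the mean aggregation are illustrative assumptions, not the paper's exact processing pipeline.

```python
import numpy as np

def radar_views(rad_tensor: np.ndarray):
    """Project a range-angle-Doppler (RAD) tensor into three 2D views.

    Illustrative sketch: the axis convention (range, angle, Doppler)
    and the mean aggregation are assumptions for demonstration only.
    """
    range_angle = rad_tensor.mean(axis=2)    # collapse the Doppler axis
    range_doppler = rad_tensor.mean(axis=1)  # collapse the angle axis
    angle_doppler = rad_tensor.mean(axis=0)  # collapse the range axis
    return range_angle, range_doppler, angle_doppler

# Dummy tensor with hypothetical bin counts: 256 range, 256 angle, 64 Doppler.
rad = np.random.rand(256, 256, 64)
ra, rd, ad = radar_views(rad)
print(ra.shape, rd.shape, ad.shape)  # (256, 256) (256, 64) (256, 64)
```

Each resulting view can then be fed to a dedicated encoder branch, which is the general pattern that the multi-view architectures described above exploit.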