Perception systems for autonomous driving have seen significant advancements in performance over the last few years. However, these systems struggle to remain robust in extreme weather conditions because lidars and cameras, the primary sensors in a sensor suite, degrade under such conditions. To address this problem, camera-radar fusion systems provide a unique opportunity for all-weather, reliable, high-quality perception: cameras provide rich semantic information, while radars can work through occlusions and in all weather conditions. In this work, we show that state-of-the-art fusion methods perform poorly when the camera input is degraded, which effectively forfeits the all-weather reliability they set out to achieve. In contrast to these approaches, we propose a new method, RadSegNet, that follows a new design philosophy of independent information extraction and truly achieves reliability in all conditions, including occlusions and adverse weather. We develop and validate our proposed system on the benchmark Astyx dataset and further verify these results on the RADIATE dataset. Compared to state-of-the-art methods, RadSegNet achieves a 27% improvement in average precision score on Astyx and a 41.46% improvement on RADIATE, and maintains significantly better performance in adverse weather conditions.