Deep learning (DL) has recently attracted increasing interest for improving object type classification with automotive radar. In addition to high accuracy, it is crucial for decision making in autonomous vehicles to evaluate the reliability of predictions; however, the decisions of DL networks are non-transparent. Current DL research has investigated how the uncertainty of predictions can be quantified, and in this article we evaluate the potential of these methods for safe automotive radar perception. In particular, we evaluate how uncertainty quantification can support radar perception under (1) domain shift, (2) corruptions of the input signals, and (3) the presence of unknown objects. We find that, in agreement with phenomena observed in the literature, deep radar classifiers are overly confident, even in their wrong predictions. This raises concerns about using the confidence values for decision making under uncertainty, as the model fails to signal situations it cannot handle. Accurate confidence values would allow optimal integration of multiple information sources, e.g., via sensor fusion. We show that by applying state-of-the-art post-hoc uncertainty calibration, the quality of confidence measures can be significantly improved, thereby partially resolving the over-confidence problem. Our investigation shows that further research into training and calibrating DL networks is necessary and offers great potential for safe automotive object classification with radar sensors.
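The post-hoc calibration mentioned above can be illustrated with temperature scaling, the most common such method: after training, a single scalar temperature T is fitted on a validation set to minimize the negative log-likelihood, softening overconfident softmax outputs without changing the predicted class. The sketch below is a minimal, hypothetical illustration (the grid-search range and toy logits are assumptions, not the authors' setup):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    """Mean negative log-likelihood of the true labels at temperature T."""
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Fit T on held-out (validation) logits by grid search over the NLL.

    Note: class predictions (argmax) are unchanged for any T > 0; only
    the confidence values are rescaled.
    """
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy validation set: an overconfident classifier that is wrong on 1 of 4
# samples. Fitting T > 1 softens the confidences toward the true accuracy.
val_logits = np.array([[4.0, 0.0], [4.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
val_labels = np.array([0, 0, 1, 1])  # third sample is misclassified
T_fit = fit_temperature(val_logits, val_labels)
```

Because argmax is invariant to the temperature, this procedure improves calibration (confidences better match empirical accuracy) while leaving classification accuracy untouched, which is what makes it attractive as a post-hoc step.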