Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings. In perception for autonomous driving, measuring the uncertainty means providing additional calibrated information to downstream tasks, such as path planning, that can use it towards safe navigation. In this work, we propose a novel sampling-free uncertainty estimation method for object detection. We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size. To achieve this, we propose an uncertainty-aware heatmap, and exploit the neighboring bounding boxes provided by the detector at inference time. We evaluate the detection performance and the quality of the different uncertainty estimates separately, also on challenging out-of-domain samples: BDD100K and nuImages, with models trained on KITTI. Additionally, we propose a new metric to evaluate location and size uncertainties. When transferring to unseen datasets, CertainNet generalizes substantially better than previous methods and an ensemble, while being real-time and providing high-quality and comprehensive uncertainty estimates.
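The abstract only sketches the idea of exploiting the neighboring bounding boxes available at inference time. As a purely illustrative sketch (not the CertainNet formulation), one could read the spread of overlapping candidate boxes as a rough location and size uncertainty; the box format, the function name, and the standard-deviation aggregation below are assumptions made for illustration only.

```python
import numpy as np

def location_size_spread(candidate_boxes):
    """Illustrative sketch only (hypothetical helper, not the paper's method):
    aggregate overlapping candidate boxes in (x1, y1, x2, y2) format into a
    mean box and a per-coordinate standard deviation, which can be interpreted
    as a rough location/size spread across neighboring detections."""
    boxes = np.asarray(candidate_boxes, dtype=np.float32)  # shape (N, 4)
    mean_box = boxes.mean(axis=0)   # consensus box over the neighbors
    spread = boxes.std(axis=0)      # per-coordinate spread (uncertainty proxy)
    return mean_box, spread

# Example: three neighboring candidate boxes for the same object
boxes = [[100, 50, 180, 120],
         [ 98, 52, 176, 118],
         [104, 49, 184, 123]]
mean_box, spread = location_size_spread(boxes)
print(mean_box, spread)
```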