Reliable scene understanding is indispensable for modern autonomous systems. Current learning-based methods typically try to maximize their performance with respect to segmentation metrics that consider only the quality of the segmentation. However, for the safe operation of a system in the real world, it is crucial to also consider the uncertainty in the prediction. In this work, we introduce the novel task of uncertainty-aware panoptic segmentation, which aims to predict per-pixel semantic and instance segmentations together with per-pixel uncertainty estimates. We define two novel metrics to facilitate its quantitative analysis: the uncertainty-aware Panoptic Quality (uPQ) and the panoptic Expected Calibration Error (pECE). We further propose the novel top-down Evidential Panoptic Segmentation Network (EvPSNet) to solve this task. Our architecture employs a simple yet effective probabilistic fusion module that leverages the predicted uncertainties. Additionally, we propose a new Lov\'asz evidential loss function to optimize the IoU of the segmentation using the probabilities provided by deep evidential learning. Furthermore, we provide several strong baselines that combine state-of-the-art panoptic segmentation networks with sampling-free uncertainty estimation techniques. Extensive evaluations show that our EvPSNet achieves a new state of the art for the standard Panoptic Quality (PQ), as well as for our uncertainty-aware panoptic metrics.
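To make the evidential machinery concrete, here is a minimal sketch of the quantities typically involved in deep evidential learning for a $K$-class segmentation head; the notation is ours, and the exact parameterization used by EvPSNet may differ. The network predicts non-negative evidence $e_k$ for each class $k$, which induces a Dirichlet distribution with parameters $\alpha_k = e_k + 1$. The expected class probability and a per-pixel uncertainty estimate then follow directly:
\[
p_k = \frac{\alpha_k}{S}, \qquad u = \frac{K}{S}, \qquad S = \sum_{k=1}^{K} \alpha_k .
\]
These expected probabilities $p_k$ are the natural inputs for an IoU surrogate such as the Lov\'asz extension, and the scalar $u$ is the kind of per-pixel uncertainty a probabilistic fusion module can weight by.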
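For the calibration metric, a plausible reading is that the panoptic Expected Calibration Error builds on the standard ECE, which bins $n$ predictions into $M$ confidence bins $B_m$ and averages the gap between per-bin accuracy and confidence:
\[
\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right| .
\]
The panoptic variant presumably evaluates this gap over the per-pixel confidences of the joint semantic and instance prediction; the precise definition is given in the paper.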