In the field of deep-learning-based computer vision, the development of deep object detection has led to unique paradigms (e.g., two-stage or set-based) and architectures (e.g., Faster R-CNN or DETR) that enable outstanding performance on challenging benchmark datasets. Despite this, trained object detectors typically do not reliably assess uncertainty regarding their own knowledge, and the quality of their probabilistic predictions is usually poor. As these predictions are often used to make subsequent decisions, such inaccurate probabilistic predictions must be avoided. In this work, we investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting. We propose a framework to ensure a fair, unbiased, and repeatable evaluation and conduct detailed analyses of calibration under distributional changes (e.g., distributional shift and application to out-of-distribution data). Furthermore, by investigating the influence of different detector paradigms, post-processing steps, and suitable choices of metrics, we deliver novel insights into why poor detector calibration emerges. Based on these insights, we are able to improve the calibration of a detector simply by fine-tuning its last layer.
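To make the notion of detector calibration concrete, the following is a minimal sketch of a binned expected-calibration-error computation over detection confidences. It is an illustration only, not the evaluation framework proposed in this work: the function name, the number of bins, and the convention that a detection counts as correct when it matches a ground-truth box (e.g., IoU ≥ 0.5 with the correct class) are all assumptions made for the example.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE over per-detection confidence scores.

    confidences: predicted scores in [0, 1], one per detection
    correct: 1 if the detection matches a ground-truth box
             (assumed criterion: IoU >= 0.5 and correct class), else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; the first bin also includes exactly 0.
        in_bin = (confidences > lo) & (confidences <= hi)
        if lo == 0.0:
            in_bin |= confidences == 0.0
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()        # empirical precision in the bin
        conf = confidences[in_bin].mean()   # average predicted confidence
        ece += in_bin.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated detector would yield confidences that match the empirical precision in every bin (ECE near 0); the gap measured here is the kind of miscalibration the abstract refers to.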