Vanilla models for object detection and instance segmentation suffer from a heavy bias toward detecting frequent objects in the long-tailed setting. Existing methods address this issue mostly during training, e.g., by re-sampling or re-weighting. In this paper, we investigate a largely overlooked approach -- post-processing calibration of confidence scores. We propose NorCal, Normalized Calibration for long-tailed object detection and instance segmentation, a simple and straightforward recipe that reweights the predicted scores of each class by its training sample size. We show that separately handling the background class and normalizing the scores over classes for each proposal are key to achieving superior performance. On the LVIS dataset, NorCal can effectively improve nearly all the baseline models not only on rare classes but also on common and frequent classes. Finally, we conduct extensive analysis and ablation studies to offer insights into various modeling choices and mechanisms of our approach. Our code is publicly available at https://github.com/tydpan/NorCal/.
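The calibration recipe described above can be sketched in a few lines. This is a minimal illustration, not the authors' reference implementation: it assumes each proposal carries softmax scores with the background class in the last column, that per-class training sample counts are available, and that a single exponent `gamma` (a hypothetical tuning knob) controls the strength of the reweighting.

```python
import numpy as np

def norcal_calibrate(scores, class_counts, gamma=1.0):
    """Post-hoc score calibration in the spirit of NorCal (sketch).

    scores:       (num_proposals, num_classes + 1) softmax scores,
                  last column assumed to be the background class.
    class_counts: (num_classes,) training sample size per foreground class.
    gamma:        assumed temperature-like exponent on the counts.
    """
    scores = np.asarray(scores, dtype=float)
    fg, bg = scores[:, :-1], scores[:, -1:]

    # Down-weight each foreground class by its training sample size,
    # so head classes are suppressed relative to rare classes.
    weights = np.asarray(class_counts, dtype=float) ** gamma
    fg = fg / weights

    # Handle background separately (left unweighted) and re-normalize
    # the scores over all classes for each proposal.
    total = fg.sum(axis=1, keepdims=True) + bg
    return np.concatenate([fg, bg], axis=1) / total
```

For example, a proposal scored `[0.5, 0.3, 0.2]` over a frequent class (100 samples), a rare class (1 sample), and background would, after calibration with `gamma=1`, assign the rare class a higher calibrated score than its raw 0.3, shifting the final prediction away from the head class without any retraining.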