Generating confidence-calibrated outputs is of utmost importance for the application of deep neural networks in safety-critical decision-making systems. The output of a neural network is a probability distribution whose scores are estimated confidences that the input belongs to the corresponding classes; hence, they represent a complete estimate of the output likelihood relative to all classes. In this paper, we propose a novel form of label smoothing to improve confidence calibration. Since different classes vary in their intrinsic similarity, more similar classes should produce closer probability values in the final output. This motivates a new smooth label whose values are based on similarity with the reference class. We adopt different similarity measures, including those that capture feature-based and semantic similarity. Through extensive experiments on various datasets and network architectures, we demonstrate that our approach consistently outperforms state-of-the-art calibration techniques, including uniform label smoothing.
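The core idea of distributing the smoothing mass according to class similarity can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's exact formulation: `sim_matrix` stands in for any precomputed class-similarity matrix (e.g. from feature-space distances or semantic embeddings), and `alpha` is the usual label-smoothing coefficient; the function name and fallback behavior are hypothetical.

```python
import numpy as np

def similarity_smoothed_labels(true_class, sim_matrix, alpha=0.1):
    """Build a smoothed one-hot label where the smoothing mass `alpha`
    is spread over the non-target classes in proportion to their
    similarity with the reference (true) class, rather than uniformly.

    Assumed inputs (illustrative, not the paper's notation):
      true_class -- index of the reference class
      sim_matrix -- (num_classes, num_classes) nonnegative similarities
      alpha      -- total probability mass moved off the target class
    """
    num_classes = sim_matrix.shape[0]
    sims = sim_matrix[true_class].astype(float).copy()
    sims[true_class] = 0.0  # the target keeps the 1 - alpha mass below
    total = sims.sum()
    if total == 0.0:
        # degenerate case: fall back to uniform smoothing over non-targets
        sims = np.ones(num_classes)
        sims[true_class] = 0.0
        total = sims.sum()
    label = alpha * sims / total
    label[true_class] = 1.0 - alpha
    return label
```

With a uniform similarity matrix this reduces to standard uniform label smoothing; with a non-trivial matrix, classes more similar to the reference class receive a larger share of the `alpha` mass, so the target vector itself encodes inter-class structure.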