Deep Neural Networks (DNNs), despite their tremendous success in recent years, can still cast doubt on their predictions due to the intrinsic uncertainty associated with their learning process. Ensemble techniques and post-hoc calibration are two types of approaches that have individually shown promise in improving the uncertainty calibration of DNNs. However, the synergistic effect of the two types of methods has not been well explored. In this paper, we propose a truth discovery framework to integrate ensemble-based and post-hoc calibration methods. Using the geometric variance of the ensemble candidates as a good indicator of sample uncertainty, we design an accuracy-preserving truth estimator with provably no accuracy drop. Furthermore, we show that post-hoc calibration can also be enhanced by truth discovery-regularized optimization. On large-scale datasets including CIFAR and ImageNet, our method shows consistent improvement over state-of-the-art calibration approaches on both histogram-based and kernel density-based evaluation metrics. Our code is available at https://github.com/horsepurve/truly-uncertain.
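To make the idea of using ensemble disagreement as an uncertainty indicator concrete, here is a minimal sketch (not the paper's actual implementation) that scores each sample by the variance of softmax outputs across ensemble members; the function name and the plain per-class variance are illustrative stand-ins for the geometric variance described in the abstract.

```python
import numpy as np

def ensemble_uncertainty(probs: np.ndarray) -> np.ndarray:
    """Score per-sample disagreement among ensemble members.

    probs: shape (n_members, n_samples, n_classes), softmax outputs
           of each ensemble member on the same batch.
    Returns one uncertainty score per sample: the per-class variance
    across members, averaged over classes. This plain variance is a
    hypothetical proxy for the geometric variance used in the paper.
    """
    return probs.var(axis=0).mean(axis=-1)

# Toy example: two members, two samples, two classes.
probs = np.array([
    [[0.9, 0.1], [0.5, 0.5]],  # member 1
    [[0.9, 0.1], [0.1, 0.9]],  # member 2
])
u = ensemble_uncertainty(probs)
# Members agree on sample 0 and disagree on sample 1,
# so u[0] is lower than u[1].
```

Samples where the members agree receive low scores, and samples where they disagree receive high scores; such a score could then feed a truth estimator or a regularized post-hoc calibration step as the abstract suggests.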