Confidence estimation, the task of evaluating the trustworthiness of a model's predictions during deployment, has received considerable research attention recently owing to its importance for the safe deployment of deep models. Previous works have outlined two important qualities that a reliable confidence estimation model should possess: the ability to perform well under label imbalance and the ability to handle various out-of-distribution inputs. In this work, we propose a meta-learning framework that simultaneously improves both qualities of a confidence estimation model. Specifically, we first construct virtual training and testing sets with intentionally designed distribution differences between them. Our framework then uses these constructed sets to train the confidence estimation model through a virtual training and testing scheme, guiding it to learn knowledge that generalizes to diverse distributions. We demonstrate the effectiveness of our framework on both monocular depth estimation and image classification.
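To make the virtual training and testing scheme concrete, here is a minimal sketch of a generic MAML-style bi-level update, assuming a PyTorch setup; it is not the authors' implementation, and names such as `conf_head`, `virtual_train`, and `virtual_test` are hypothetical placeholders. The inner step adapts the confidence head on the virtual training set, and the outer loss on the distribution-shifted virtual testing set rewards updates that generalize across the gap.

```python
# A minimal sketch (not the paper's code) of a virtual train/test
# meta-learning step for a confidence estimation head.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical confidence head: maps 16-dim features to a confidence score in (0, 1).
w = torch.randn(16, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

def conf_head(x, w, b):
    return torch.sigmoid(x @ w + b)

# Virtual training and testing sets with an intentional distribution
# difference (simulated here by shifting the feature distribution).
virtual_train = (torch.randn(32, 16), torch.randint(0, 2, (32, 1)).float())
virtual_test = (torch.randn(32, 16) + 1.0, torch.randint(0, 2, (32, 1)).float())

inner_lr, outer_lr = 0.1, 0.01
optimizer = torch.optim.SGD([w, b], lr=outer_lr)

for step in range(100):
    x_tr, y_tr = virtual_train
    x_te, y_te = virtual_test

    # Virtual training: one inner gradient step on the virtual training set,
    # keeping the graph so the outer update can differentiate through it.
    train_loss = F.binary_cross_entropy(conf_head(x_tr, w, b), y_tr)
    gw, gb = torch.autograd.grad(train_loss, (w, b), create_graph=True)
    w_fast, b_fast = w - inner_lr * gw, b - inner_lr * gb

    # Virtual testing: evaluate the adapted parameters on the shifted set,
    # encouraging updates that generalize to the different distribution.
    test_loss = F.binary_cross_entropy(conf_head(x_te, w_fast, b_fast), y_te)

    optimizer.zero_grad()
    (train_loss + test_loss).backward()
    optimizer.step()
```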