Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target domain. While remarkable advances have been made, most existing DA methods focus on improving target accuracy at inference. How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios, but remains largely under-explored. In this paper, we delve into the open problem of Calibration in DA, which is extremely challenging due to the coexistence of domain shift and the lack of target labels. We first reveal a dilemma: DA models learn higher accuracy at the expense of well-calibrated probabilities. Driven by this finding, we propose Transferable Calibration (TransCal) to achieve more accurate calibration with lower bias and variance in a unified hyperparameter-free optimization framework. As a general post-hoc calibration method, TransCal can be easily applied to recalibrate existing DA methods. Its efficacy has been justified both theoretically and empirically.
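The abstract does not detail the procedure, but for context on what "post-hoc calibration" means here: the standard baseline is temperature scaling, which fits a single temperature on a labeled held-out set and rescales logits at inference. Below is a minimal sketch of that baseline in PyTorch (function and variable names are illustrative, not from the paper); TransCal's difficulty is that in DA the target domain lacks the labels this recipe relies on.

```python
import torch

def calibrate_temperature(logits, labels, max_iter=50):
    """Fit a single temperature T on held-out (logit, label) pairs by
    minimizing the negative log-likelihood (standard temperature scaling)."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        optimizer.zero_grad()
        loss = nll(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage sketch: fit T on labeled validation logits, then rescale test logits.
# T = calibrate_temperature(val_logits, val_labels)
# probs = torch.softmax(test_logits / T, dim=1)
```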