Epistemic uncertainty quantification is a crucial part of drawing credible conclusions from predictive models, whether one is concerned with the prediction at a given point or with any downstream evaluation that uses the model as input. When the predictive model is simple and its evaluation differentiable, this task is solved by the delta method, where we propagate the asymptotically normal uncertainty in the predictive model through the evaluation to compute standard errors and Wald confidence intervals. However, this becomes difficult when the model and/or evaluation becomes more complex. Remedies include the bootstrap, but it can be computationally infeasible when training the model even once is costly. In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of the predictive model in order to automatically assess downstream uncertainty. We show that the change in the evaluation due to regularization is consistent for the asymptotic variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference. This both provides a reliable quantification of uncertainty in terms of standard errors and permits the construction of calibrated confidence intervals. We discuss connections to other approaches to uncertainty quantification, both Bayesian and frequentist, and demonstrate our approach empirically.
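To make the mechanism concrete, here is a minimal sketch of the finite-difference version of the idea: retrain the model with the training loss perturbed by a small multiple of the evaluation functional, and read off the variance of the evaluation from the induced change. The sketch assumes a well-specified Gaussian linear model so that the classical delta method gives a closed form to compare against; the step size `eps`, the helper names (`nll`, `evaluation`, `fit`), and the query point are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the implicit-delta idea (not the authors' code):
# perturb the training loss by -eps * T(theta), refit, and estimate the
# variance of T(theta_hat) by a finite difference. The model here is a
# Gaussian linear regression with sigma = 1, chosen so the classical
# delta-method variance has a closed form to check against.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
theta_true = np.array([1.0, -0.5, 2.0])
y = X @ theta_true + rng.normal(size=n)

x0 = np.array([0.3, -1.2, 0.7])      # query point for the downstream evaluation

def nll(theta):
    # Training loss: Gaussian negative log-likelihood with sigma = 1.
    r = y - X @ theta
    return 0.5 * r @ r

def evaluation(theta):
    # Downstream evaluation T(theta): the model's prediction at x0.
    return x0 @ theta

def fit(loss):
    # Generic trainer; BFGS with a tight gradient tolerance so optimizer
    # noise stays well below the eps-scale finite difference.
    return minimize(loss, np.zeros(d), method="BFGS",
                    options={"gtol": 1e-8}).x

theta_hat = fit(nll)

eps = 1e-3                           # finite-difference step (assumed value)
theta_eps = fit(lambda th: nll(th) - eps * evaluation(th))

# Implicit delta method: the regularization-induced change in the
# evaluation, divided by eps, estimates the variance of T(theta_hat)
# (under a well-specified likelihood, where Hessian = Fisher information).
var_implicit = (evaluation(theta_eps) - evaluation(theta_hat)) / eps

# Classical delta method for this model: x0' (X'X)^{-1} x0.
var_delta = x0 @ np.linalg.solve(X.T @ X, x0)

print(f"implicit delta: {var_implicit:.6f}   classical delta: {var_delta:.6f}")
```

Because the loss here is quadratic, the finite difference recovers the delta-method variance exactly up to optimizer tolerance; for general models, the consistency result stated above is what justifies using the same quotient as the step size shrinks.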