Neural networks make accurate predictions but often fail to provide reliable uncertainty estimates, especially under covariate distribution shifts between training and testing. To address this problem, we propose a Bayesian framework for uncertainty estimation that explicitly accounts for covariate shift. Whereas conventional approaches rely on a fixed prior, the key idea of our method is an adaptive prior, conditioned on both the training covariates and the new covariates. This prior naturally increases the uncertainty assigned to inputs that lie far from the training distribution, precisely the regions where predictive performance is likely to degrade. To efficiently approximate the resulting posterior predictive distribution, we employ amortized variational inference. Finally, we construct synthetic environments by drawing small bootstrap samples from the training data, simulating a range of plausible covariate shifts using only the original dataset. Experiments on synthetic and real-world data show that our method yields substantially improved uncertainty estimates under distribution shift.
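To make the adaptive-prior idea concrete, one way to write it down is as follows; the notation here is ours, and the paper's exact formulation may differ. With training data $\mathcal{D} = (\mathbf{X}, \mathbf{y})$ and a test covariate $x_*$, a conventional Bayesian model uses a fixed prior $\pi(\theta)$, whereas a covariate-conditioned prior gives a posterior predictive of the form

\[
p(y_* \mid x_*, \mathcal{D}) \;=\; \int p(y_* \mid x_*, \theta)\, p(\theta \mid \mathcal{D}, x_*)\, \mathrm{d}\theta,
\qquad
p(\theta \mid \mathcal{D}, x_*) \;\propto\; p(\mathbf{y} \mid \mathbf{X}, \theta)\, \pi(\theta \mid \mathbf{X}, x_*),
\]

so a prior that broadens as $x_*$ moves away from the training covariates $\mathbf{X}$ directly inflates predictive uncertainty out of distribution.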
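Amortized variational inference replaces per-dataset posterior optimization with an inference network that maps a dataset (or context set) to the parameters of an approximate posterior in a single forward pass. The PyTorch sketch below shows the generic pattern; the class, architecture, and dimensions are illustrative placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

class AmortizedPosterior(nn.Module):
    """Inference network: maps a context set of (x, y) pairs to the mean
    and log-variance of a Gaussian approximate posterior over a latent
    parameter vector z, so inference is a single forward pass."""

    def __init__(self, x_dim, y_dim, z_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_mu = nn.Linear(hidden, z_dim)
        self.to_logvar = nn.Linear(hidden, z_dim)

    def forward(self, x, y):
        # Encode each (x, y) pair, then mean-pool so the posterior
        # parameters are invariant to the ordering of the context set.
        h = self.encoder(torch.cat([x, y], dim=-1)).mean(dim=0)
        return self.to_mu(h), self.to_logvar(h)

# Example: a toy context set of 100 pairs (5-d covariates, scalar targets).
net = AmortizedPosterior(x_dim=5, y_dim=1, z_dim=8)
mu, logvar = net(torch.randn(100, 5), torch.randn(100, 1))
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized sample
```

Because the encoder is shared across datasets, the same network can be applied to each synthetic environment, which is what makes the bootstrap construction below cheap to exploit at training time.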
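The synthetic-environment construction can be sketched in a few lines: repeatedly draw small bootstrap samples from the training set, so that each resample over- or under-represents parts of covariate space, and the collection as a whole mimics a range of plausible covariate shifts. All names and default sizes below are hypothetical, a minimal sketch rather than the paper's exact procedure.

```python
import numpy as np

def make_shift_environments(X, y, n_envs=50, env_size=32, rng=None):
    """Draw small bootstrap samples from the training data.

    Small resamples are deliberately noisy: each one skews the empirical
    covariate distribution, so the set of environments simulates a range
    of plausible covariate shifts using only the original dataset.
    """
    rng = np.random.default_rng(rng)
    envs = []
    for _ in range(n_envs):
        idx = rng.choice(len(X), size=env_size, replace=True)
        envs.append((X[idx], y[idx]))
    return envs

# Example: 50 environments of 32 points each from a toy regression dataset.
X = np.random.randn(1000, 5)
y = X @ np.random.randn(5) + 0.1 * np.random.randn(1000)
environments = make_shift_environments(X, y, rng=0)
```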