In prediction problems, it is common to model the data-generating process and then use a model-based procedure, such as a Bayesian predictive distribution, to quantify uncertainty about the next observation. However, if the posited model is misspecified, then its predictions may not be calibrated -- that is, the predictive distribution's quantiles may not be nominal frequentist prediction upper limits, even asymptotically. Rather than abandoning the comfort of a model-based formulation for a more complicated non-model-based approach, here we propose a strategy in which the data itself helps determine if the assumed model-based solution should be adjusted to account for model misspecification. This is achieved through a generalized Bayes formulation where a learning rate parameter is tuned, via the proposed generalized predictive calibration (GPrC) algorithm, to make the predictive distribution calibrated, even under model misspecification. Extensive numerical experiments are presented, under a variety of settings, demonstrating the proposed GPrC algorithm's validity, efficiency, and robustness.
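To make the idea concrete, below is a minimal, hypothetical sketch (not the paper's actual GPrC algorithm) of how a learning rate in a generalized Bayes formulation might be tuned for predictive calibration. It assumes a simple Gaussian location model with a conjugate prior, and uses a crude bootstrap proxy for the coverage of the predictive upper limit; the function names, the grid of learning rates, and the bootstrap scheme are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def gen_bayes_predictive(x, eta, sigma=1.0, tau=10.0):
    """Generalized-Bayes (eta-tempered) posterior predictive for a
    N(theta, sigma^2) model with a N(0, tau^2) prior on theta.
    Returns the predictive mean and standard deviation.
    (Illustrative model choice, not from the paper.)"""
    n = len(x)
    post_var = 1.0 / (1.0 / tau**2 + eta * n / sigma**2)
    post_mean = post_var * (eta * np.sum(x) / sigma**2)
    return post_mean, np.sqrt(post_var + sigma**2)

def estimated_coverage(x, eta, alpha=0.9, n_boot=500, rng=None):
    """Crude bootstrap estimate of how often the level-alpha predictive
    upper limit covers a held-out observation (a stand-in for the
    calibration check that GPrC performs)."""
    rng = np.random.default_rng() if rng is None else rng
    hits = 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))
        x_star = x[idx]                       # bootstrap resample
        x_new = x[rng.integers(0, len(x))]    # proxy for a future observation
        m, s = gen_bayes_predictive(x_star, eta)
        upper = stats.norm.ppf(alpha, loc=m, scale=s)
        hits += (x_new <= upper)
    return hits / n_boot

def tune_learning_rate(x, alpha=0.9, grid=np.linspace(0.05, 2.0, 40)):
    """Pick the learning rate whose estimated coverage is closest to the
    nominal level alpha."""
    rng = np.random.default_rng(0)
    cov = np.array([estimated_coverage(x, eta, alpha, rng=rng) for eta in grid])
    return grid[np.argmin(np.abs(cov - alpha))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_t(df=3, size=100)  # heavier-tailed truth: model is misspecified
    print("selected learning rate:", tune_learning_rate(x))
```

The point of the sketch is only the overall shape of the procedure: temper the likelihood by a learning rate, estimate the frequentist coverage of the resulting predictive quantiles from the data, and select the rate at which estimated and nominal coverage agree.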