Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be highly challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to base Bayesian inference directly on the surrogate, but this can result in bias and poor uncertainty quantification. In this paper we propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimising a transform of the approximate posterior that maximises a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity.
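To make the idea concrete, the following is a minimal, hypothetical sketch of the general approach described above, not the paper's exact algorithm: it learns an affine adjustment of approximate posterior samples by optimising an average energy score computed from a small, fixed number of complex-model simulations. The simulator, the biased approximate posterior, and all dimensions and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d = 2          # parameter dimension (assumed for illustration)
M = 20         # fixed, small number of complex-model simulations
N = 200        # approximate-posterior samples per simulated dataset


def simulate_complex_model(theta):
    """Stand-in for an expensive simulator: data summary ~ N(theta, I)."""
    return theta + rng.normal(size=d)


def approx_posterior_sample(y, n):
    """Stand-in for a cheap but biased approximate posterior given data y."""
    return y + 0.5 + 0.5 * rng.normal(size=(n, d))   # shifted mean, wrong scale


def energy_score(samples, theta):
    """Negatively oriented energy score: minimising it is equivalent to
    maximising the (positively oriented) scoring rule."""
    term1 = np.mean(np.linalg.norm(samples - theta, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - term2


# 1) Draw parameters from the approximate posterior for the observed data and
#    run the complex simulator once at each draw (M simulations in total).
y_obs = np.array([1.0, -1.0])
theta_draws = approx_posterior_sample(y_obs, M)
sim_data = np.array([simulate_complex_model(t) for t in theta_draws])

# 2) Obtain approximate-posterior samples for each simulated dataset.
approx_samples = np.array([approx_posterior_sample(y, N) for y in sim_data])


def objective(params):
    """Average energy score of affinely transformed approximate samples."""
    A = params[: d * d].reshape(d, d)
    b = params[d * d:]
    total = 0.0
    for m in range(M):
        centre = approx_samples[m].mean(axis=0)
        transformed = (approx_samples[m] - centre) @ A.T + centre + b
        total += energy_score(transformed, theta_draws[m])
    return total / M


# 3) Optimise the transform; no further complex-model simulations are needed.
x0 = np.concatenate([np.eye(d).ravel(), np.zeros(d)])  # start at identity map
res = minimize(objective, x0, method="Nelder-Mead",
               options={"maxiter": 2000, "xatol": 1e-4, "fatol": 1e-4})
A_hat, b_hat = res.x[: d * d].reshape(d, d), res.x[d * d:]

# 4) Adjust the approximate posterior for the observed data with the learned map.
raw = approx_posterior_sample(y_obs, 2000)
adjusted = (raw - raw.mean(axis=0)) @ A_hat.T + raw.mean(axis=0) + b_hat
print("raw mean:", raw.mean(axis=0), "adjusted mean:", adjusted.mean(axis=0))
```

In this toy setting the adjusted samples shift back toward the parameters that generated the simulated data, illustrating the bias reduction; richer transforms or importance weighting (as in the paper) would replace the simple affine map here.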