We provide a general solution to a fundamental open problem in Bayesian inference: the poor uncertainty quantification, from a frequentist standpoint, of Bayesian methods in misspecified models. Whereas existing solutions rely on explicit Gaussian approximations of the posterior or on computationally onerous post-processing procedures, we demonstrate that correct uncertainty quantification can be achieved by replacing the usual posterior with an intuitive approximate posterior. Critically, our solution applies to likelihood-based and generalized posteriors, as well as to cases where the likelihood is intractable and must be estimated. We formally establish the reliable uncertainty quantification of the proposed approach and show that valid uncertainty quantification is not merely an asymptotic property but arises even in small samples. We illustrate the approach through a range of examples, including linear and generalized mixed effects models.