Generalised Bayesian inference updates prior beliefs using a loss function, rather than a likelihood, and can therefore be used to confer robustness against possible misspecification of the likelihood. Here we consider generalised Bayesian inference with a Stein discrepancy as a loss function, motivated by applications in which the likelihood contains an intractable normalisation constant. In this context, the Stein discrepancy circumvents evaluation of the normalisation constant and produces generalised posteriors that are either closed form or accessible using standard Markov chain Monte Carlo. On a theoretical level, we show consistency, asymptotic normality, and bias-robustness of the generalised posterior, highlighting how these properties are impacted by the choice of Stein discrepancy. Then, we provide numerical experiments on a range of intractable distributions, including applications to kernel-based exponential family models and non-Gaussian graphical models.
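As a minimal sketch of the construction just described (the notation below is ours, not taken verbatim from the paper): writing \(\mathbb{P}_n\) for the empirical distribution of observations \(x_1,\dots,x_n\), \(P_\theta\) for the model, and \(\mathrm{SD}\) for a Stein discrepancy, the generalised posterior takes the standard loss-based form
\[
\pi_\beta(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\,\exp\bigl\{-\beta\, n\, \mathrm{SD}^2(\mathbb{P}_n \,\|\, P_\theta)\bigr\}, \qquad \beta > 0,
\]
and when \(\mathrm{SD}\) is a kernel Stein discrepancy the loss admits the closed-form estimate
\[
\mathrm{SD}^2(\mathbb{P}_n \,\|\, P_\theta) \;=\; \frac{1}{n^2}\sum_{i,j=1}^{n} k_\theta(x_i, x_j),
\]
where the Stein kernel \(k_\theta\) depends on the model density \(p_\theta\) only through its score \(\nabla_x \log p_\theta\), so the intractable normalisation constant cancels and never needs to be evaluated.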