Generalised Bayesian inference updates prior beliefs using a loss function, rather than a likelihood, and can therefore be used to confer robustness against possible mis-specification of the likelihood. Here we consider generalised Bayesian inference with a Stein discrepancy as a loss function, motivated by applications in which the likelihood contains an intractable normalisation constant. In this context, the Stein discrepancy circumvents evaluation of the normalisation constant and produces generalised posteriors that are either closed form or accessible using standard Markov chain Monte Carlo. On a theoretical level, we show consistency, asymptotic normality, and bias-robustness of the generalised posterior, highlighting how these properties are impacted by the choice of Stein discrepancy. Then, we provide numerical experiments on a range of intractable distributions, including applications to kernel-based exponential family models and non-Gaussian graphical models.