Efficient Bayesian inference remains a computational challenge in hierarchical models. Simulation-based approaches such as Markov chain Monte Carlo methods remain popular but carry a large computational cost. For the large class of Latent Gaussian Models, the INLA methodology embedded in the R-INLA software provides accurate Bayesian inference by computing a deterministic mixture representation that approximates the joint posterior, from which marginals are then computed. From the outset, the INLA approach has targeted approximations of univariate posteriors. In this paper we lay the foundations of tools that also provide joint approximations for subsets of the latent field. These approximations inherit a Gaussian copula structure and additionally incorporate corrections for skewness. The same idea is carried forward to sampling from the mixture representation, which can now also be adjusted for skewness.
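To illustrate the underlying idea at a high level, the following is a minimal sketch, not the R-INLA implementation: it builds a joint sample whose dependence comes from a Gaussian copula and whose marginals are skewness-corrected via skew-normal quantile transforms. The correlation matrix and the skew-normal parameters (xi, omega, alpha) are hypothetical values chosen only for illustration; the sketch uses the mvtnorm and sn packages.

```r
## Hypothetical sketch: Gaussian copula dependence with skew-normal marginals
library(mvtnorm)  # rmvnorm(): multivariate Gaussian draws
library(sn)       # qsn(): skew-normal quantile function

set.seed(1)
R <- matrix(c(1, 0.6, 0.6, 1), 2, 2)   # assumed correlation of a latent-field subset
z <- rmvnorm(1000, sigma = R)          # Gaussian copula draws
u <- pnorm(z)                          # map to uniforms, preserving the dependence

## skewness-corrected marginals (assumed location xi, scale omega, skewness alpha)
x1 <- qsn(u[, 1], xi = 0, omega = 1,   alpha =  3)
x2 <- qsn(u[, 2], xi = 2, omega = 0.5, alpha = -2)
samples <- cbind(x1, x2)               # joint sample with adjusted skewness
```

The Gaussian step supplies the joint (copula) structure, while the quantile transform corrects each marginal for skewness, mirroring the separation between joint dependence and marginal accuracy described above.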