Bayesian inference in generalized linear models (GLMs), i.e.~Gaussian regression with non-Gaussian likelihoods, is generally analytically intractable and requires computationally expensive approximations, such as sampling or variational inference. We propose an approximate inference framework primarily designed to be computationally cheap while still achieving high approximation quality. The concept, which we call \emph{Laplace Matching}, involves closed-form, approximate, bi-directional transformations between the parameter spaces of exponential families. These are constructed from Laplace approximations under custom-designed basis transformations. The mappings can then be leveraged to turn a latent Gaussian distribution into a conjugate prior for a rich class of observable variables. This effectively reduces inference in GLMs to conjugate inference (with small approximation errors). We empirically evaluate the method in two different GLMs, showing approximation quality comparable to state-of-the-art approximate inference techniques at a drastic reduction in computational cost. More specifically, our method has a cost comparable to the \emph{very first} step of the iterative optimization usually employed in standard GLM inference.