In most probabilistic models, computing posterior marginals or evaluating the normalizing constant exactly is intractable. Variational inference is a framework for approximating both. It casts inference as an optimization problem: we seek a distribution (or a distribution-like representation) that is as close as possible to the true posterior under some measure of distance.
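As a concrete illustration of "inference as optimization", here is a minimal, self-contained sketch (the toy target `log_p_tilde` and the Gaussian variational family are assumptions of this example, not taken from any source above): it fits a Gaussian q(z; mu, sigma) to an unnormalized one-dimensional posterior by maximizing a Monte Carlo estimate of the evidence lower bound (ELBO), which is equivalent to minimizing the KL divergence from q to the true posterior.

```python
# Minimal variational inference sketch: approximate p(z) ∝ exp(log_p_tilde(z))
# with a Gaussian q(z; mu, sigma) by maximizing the ELBO
#   ELBO(mu, sigma) = E_q[log p_tilde(z)] + H[q],
# estimated by Monte Carlo with fixed base noise (reparameterization trick).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)  # fixed noise makes the objective deterministic

def log_p_tilde(z):
    # Toy unnormalized log-posterior: a two-component target,
    # invented purely for illustration.
    return np.logaddexp(-0.5 * (z - 2.0) ** 2,
                        -0.5 * (z + 1.0) ** 2 - 1.0)

def neg_elbo(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps  # samples z ~ q via reparameterization
    entropy = 0.5 * np.log(2.0 * np.pi * np.e) + log_sigma  # H[N(mu, sigma^2)]
    return -(log_p_tilde(z).mean() + entropy)

res = minimize(neg_elbo, x0=np.zeros(2), method="L-BFGS-B")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"variational approximation: N({mu_hat:.3f}, {sigma_hat:.3f}^2)")
```

Because a single Gaussian cannot represent both modes of the toy target, the optimizer typically locks onto one of them; this mode-seeking behavior is characteristic of minimizing the reverse KL divergence KL(q‖p).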

The following is from the abstract of Wainwright and Jordan's monograph "Graphical Models, Exponential Families, and Variational Inference" (link below). The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables and for building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational, and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval, and statistical machine learning. Many problems that arise in specific instances, including the key problems of computing marginals and modes of probability distributions, are best studied in this general setting. Working with exponential-family representations, and exploiting the conjugate duality between the cumulant function and the entropy of an exponential family, the monograph develops general variational representations of the problems of computing likelihoods, marginal probabilities, and most probable configurations. It describes how a wide variety of algorithms, among them sum-product, cluster variational methods, expectation propagation, mean-field methods, max-product, and linear programming relaxations, can all be understood as exact or approximate forms of these variational representations. Variational methods thus provide a complementary alternative to Markov chain Monte Carlo for inference in large-scale statistical models.

https://www.nowpublishers.com/article/Details/MAL-001
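To make the duality in that abstract concrete, the central identity of the monograph can be stated as follows (a standard statement in the monograph's notation: A for the cumulant function, M for the set of realizable mean parameters, A* for its conjugate dual):

```latex
% For an exponential family
%   p_\theta(x) = \exp\{\langle \theta, \phi(x) \rangle - A(\theta)\},
% conjugate (Fenchel) duality between the cumulant function A and the
% negative entropy A^* yields the variational representation
\[
  A(\theta) \;=\; \sup_{\mu \in \mathcal{M}}
    \bigl\{ \langle \theta, \mu \rangle - A^{*}(\mu) \bigr\},
\]
% where the supremum is attained at the mean parameter
% \mu(\theta) = \mathbb{E}_\theta[\phi(X)]. Computing marginals thus
% becomes an optimization problem; mean-field and Bethe/sum-product
% algorithms arise from tractable approximations to \mathcal{M} and A^*.
```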

Latest Papers

Gaussian mixture models are a popular tool for model-based clustering, and mixtures of factor analyzers are Gaussian mixture models having parsimonious factor covariance structure for mixture components. There are several recent extensions of mixture of factor analyzers to deep mixtures, where the Gaussian model for the latent factors is replaced by a mixture of factor analyzers. This construction can be iterated to obtain a model with many layers. These deep models are challenging to fit, and we consider Bayesian inference using sparsity priors to further regularize the estimation. A scalable natural gradient variational inference algorithm is developed for fitting the model, and we suggest computationally efficient approaches to the architecture choice using overfitted mixtures where unnecessary components drop out in the estimation. In a number of simulated and two real examples, we demonstrate the versatility of our approach for high-dimensional problems, and demonstrate that the use of sparsity inducing priors can be helpful for obtaining improved clustering results.
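The deep mixture-of-factor-analyzers model and the natural gradient algorithm described above are specific to the paper, but the "overfitted mixture" device it mentions can be illustrated with an off-the-shelf tool. The sketch below (toy data invented for the example) uses scikit-learn's variational `BayesianGaussianMixture` as a stand-in: a small Dirichlet concentration prior acts as a sparsity-inducing prior on the mixture weights, so unnecessary components drop out during estimation.

```python
# Overfitted-mixture illustration with scikit-learn's variational GMM
# (a stand-in for the paper's deep MFA model, not its actual method).
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated clusters in 5 dimensions.
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 5)),
               rng.normal(4.0, 1.0, size=(200, 5))])

# Deliberately overfit with 10 components; the small Dirichlet
# concentration prior drives redundant components to near-zero weight.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior=1e-3,  # small value favors few active components
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

active = bgm.weights_ > 1e-2
print(f"active components: {active.sum()} of {bgm.n_components}")
print("mixture weights:", np.round(bgm.weights_, 3))
```

Inspecting which components retain non-negligible weight then serves as a cheap guide to architecture choice, in the same spirit as the paper's suggestion.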
