Probabilistic graphical models, pioneered by Turing Award winner Judea Pearl, use graphs to represent probabilistic dependencies among variables. The theory of probabilistic graphical models divides into three parts: representation, inference, and learning.


Preface

In this book, we start with the basics of graphical models: their types, why they are used, and the kinds of problems they solve. We then explore subproblems in the context of graphical models, such as representing them, building them, learning their structure and parameters, and using them to answer inference queries.

This book aims to provide enough theory and then use code examples to peek under the hood at how some of the algorithms are implemented. The code examples also serve as handy templates for building graphical models and answering probability queries. Of the many kinds of graphical models described in the literature, this book focuses primarily on discrete Bayesian networks, with occasional examples drawn from Markov networks.
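
To give a flavor of what such code examples look like, here is a minimal sketch using the pgmpy library; the library choice, the sprinkler network, and all probability values are our own assumptions for illustration, not taken from the book.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical sprinkler network: Rain and Sprinkler both influence WetGrass.
model = BayesianNetwork([('Rain', 'WetGrass'), ('Sprinkler', 'WetGrass')])

cpd_rain = TabularCPD('Rain', 2, [[0.8], [0.2]])            # P(Rain)
cpd_sprinkler = TabularCPD('Sprinkler', 2, [[0.6], [0.4]])  # P(Sprinkler)
cpd_wet = TabularCPD('WetGrass', 2,
                     [[1.00, 0.20, 0.10, 0.01],   # P(WetGrass=0 | Rain, Sprinkler)
                      [0.00, 0.80, 0.90, 0.99]],  # P(WetGrass=1 | Rain, Sprinkler)
                     evidence=['Rain', 'Sprinkler'], evidence_card=[2, 2])
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

# Probability query: was it raining, given that the grass is wet?
infer = VariableElimination(model)
print(infer.query(['Rain'], evidence={'WetGrass': 1}))
```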

Content Overview

  • Chapter 1, Probability, covers the concepts of probability theory needed to understand graphical models.

  • Chapter 2, Directed Graphical Models, provides information about Bayesian networks, their independence properties, conditional independence, and D-separation. The chapter uses code snippets to load a Bayesian network and examine its independencies (a standalone D-separation check is sketched after this list).

  • Chapter 3, Undirected Graphical Models, covers the properties of Markov networks, how they differ from Bayesian networks, and the independencies in Markov networks.

  • Chapter 4, Structure Learning, covers several approaches to inferring the structure of a Bayesian network from a dataset. We also learn about the computational complexity of structure learning and use code snippets in this chapter to learn the structure underlying sampled datasets (a score-based comparison of candidate structures is sketched after this list).

  • Chapter 5, Parameter Learning, covers the maximum likelihood and Bayesian approaches to parameter learning (both estimators are sketched after this list).

  • Chapter 6, Exact Inference Using Graphical Models, explains the variable elimination algorithm for exact inference and explores code snippets that answer inference queries with it (a from-scratch version is sketched after this list).

  • Chapter 7, Approximate Inference Methods, explores approximate inference for networks that are too large for exact inference. We also run code samples for approximate inference using loopy belief propagation on a Markov network (a minimal message-passing loop is sketched after this list).
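
In the spirit of Chapter 2, the following is a minimal, from-scratch D-separation test. The graph and the queries are made up for illustration; the algorithm is the standard reduction via the moralized ancestral graph.

```python
from itertools import combinations

def d_separated(parents, xs, ys, zs):
    """Test whether xs and ys are d-separated given zs in a DAG.

    parents maps each node to a list of its parents.  Uses the classic
    reduction: keep only ancestors of the query nodes, moralize, delete
    the conditioning set, then check undirected connectivity.
    """
    # 1. Ancestral subgraph: xs, ys, zs and all of their ancestors.
    relevant = set(xs) | set(ys) | set(zs)
    frontier = list(relevant)
    while frontier:
        node = frontier.pop()
        for p in parents.get(node, []):
            if p not in relevant:
                relevant.add(p)
                frontier.append(p)

    # 2. Moralize: connect every node to its parents, "marry" each
    #    pair of parents, and drop edge directions.
    adj = {v: set() for v in relevant}
    for v in relevant:
        ps = parents.get(v, [])
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for a, b in combinations(ps, 2):
            adj[a].add(b)
            adj[b].add(a)

    # 3. Remove zs and search for any path from xs to ys.
    seen = set()
    stack = [x for x in xs if x not in zs]
    while stack:
        node = stack.pop()
        if node in ys:
            return False              # path found => not d-separated
        if node in seen:
            continue
        seen.add(node)
        stack.extend(n for n in adj[node]
                     if n not in zs and n not in seen)
    return True

# Classic v-structure Rain -> WetGrass <- Sprinkler (made-up example):
par = {'WetGrass': ['Rain', 'Sprinkler']}
print(d_separated(par, {'Rain'}, {'Sprinkler'}, set()))         # True
print(d_separated(par, {'Rain'}, {'Sprinkler'}, {'WetGrass'}))  # False (explaining away)
```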
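
For Chapter 4, the sketch below scores two candidate structures over a pair of binary variables with the BIC score (log-likelihood minus a complexity penalty) and keeps the better one. Score-based search is only one of the approaches the chapter covers, and the data here is a made-up toy sample.

```python
import math
from collections import Counter

# Made-up samples of two binary variables (A, B).
data = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0), (0, 1), (1, 1), (0, 0)]
n = len(data)

def loglik_independent(data):
    """Log-likelihood under the empty graph: A and B independent."""
    ca = Counter(a for a, _ in data)
    cb = Counter(b for _, b in data)
    return sum(math.log(ca[a] / n) + math.log(cb[b] / n) for a, b in data)

def loglik_edge(data):
    """Log-likelihood under the graph A -> B."""
    ca = Counter(a for a, _ in data)
    cab = Counter(data)
    return sum(math.log(ca[a] / n) + math.log(cab[(a, b)] / ca[a])
               for a, b in data)

def bic(loglik, n_params):
    """BIC score: fit minus complexity penalty; higher is better."""
    return loglik - 0.5 * n_params * math.log(n)

# The empty graph has 2 free parameters (P(A), P(B));
# A -> B has 3 (P(A), plus P(B|A) for each state of A).
scores = {'A, B independent': bic(loglik_independent(data), 2),
          'A -> B': bic(loglik_edge(data), 3)}
print(max(scores, key=scores.get), scores)
```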
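
For Chapter 5, here are the two estimators applied to a single conditional probability table, with made-up weather samples. The maximum likelihood estimate uses raw conditional frequencies; the Bayesian estimate adds Dirichlet pseudo-counts, which keeps unseen combinations away from probability zero.

```python
from collections import Counter

# Hypothetical (parent, child) samples: weather -> carried an umbrella?
samples = [('sunny', 'no'), ('sunny', 'no'), ('sunny', 'yes'),
           ('rainy', 'yes'), ('rainy', 'yes'), ('rainy', 'yes')]
states = ['no', 'yes']

def cpd_mle(samples, states):
    """Maximum likelihood: CPT entries are empirical conditional frequencies."""
    parent_counts = Counter(p for p, _ in samples)
    joint_counts = Counter(samples)
    return {p: {s: joint_counts[(p, s)] / parent_counts[p] for s in states}
            for p in parent_counts}

def cpd_bayes(samples, states, alpha=1.0):
    """Bayesian estimate with a symmetric Dirichlet prior.

    alpha is the pseudo-count; alpha=1 is Laplace smoothing.  Note that
    the MLE assigns P(no | rainy) = 0, while the Bayesian estimate does not.
    """
    parent_counts = Counter(p for p, _ in samples)
    joint_counts = Counter(samples)
    k = len(states)
    return {p: {s: (joint_counts[(p, s)] + alpha) / (parent_counts[p] + alpha * k)
                for s in states}
            for p in parent_counts}

print(cpd_mle(samples, states))
print(cpd_bayes(samples, states))
```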
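
For Chapter 6, the following is a from-scratch variable elimination over binary variables, run on a hypothetical chain A -> B -> C with made-up CPTs. It is the same sum-product idea the chapter's code snippets exercise, written without any library.

```python
from itertools import product

# A factor is a pair (vars, table): a tuple of variable names and a dict
# mapping assignment tuples (each variable 0 or 1) to values.

def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    fv, ft = f
    gv, gt = g
    out = tuple(dict.fromkeys(fv + gv))       # ordered union
    table = {}
    for assign in product([0, 1], repeat=len(out)):
        env = dict(zip(out, assign))
        table[assign] = (ft[tuple(env[v] for v in fv)] *
                         gt[tuple(env[v] for v in gv)])
    return out, table

def sum_out(f, var):
    """Marginalize var out of factor f."""
    fv, ft = f
    i = fv.index(var)
    out = fv[:i] + fv[i + 1:]
    table = {}
    for assign, val in ft.items():
        key = assign[:i] + assign[i + 1:]
        table[key] = table.get(key, 0.0) + val
    return out, table

def variable_elimination(factors, elim_order):
    """Multiply the factors mentioning each variable, sum it out, repeat."""
    for var in elim_order:
        related = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        prod = related[0]
        for f in related[1:]:
            prod = multiply(prod, f)
        factors = rest + [sum_out(prod, var)]
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    z = sum(result[1].values())               # normalize the answer
    return result[0], {k: v / z for k, v in result[1].items()}

# Hypothetical chain A -> B -> C; eliminate A then B to get P(C).
factors = [
    (('A',), {(0,): 0.6, (1,): 0.4}),                                    # P(A)
    (('A', 'B'), {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}),  # P(B|A)
    (('B', 'C'), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}),  # P(C|B)
]
print(variable_elimination(factors, ['A', 'B']))   # P(C=0)=0.65, P(C=1)=0.35
```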
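
Finally, for Chapter 7, here is a minimal sum-product loopy belief propagation loop on a hypothetical three-node cyclic Markov network; the potentials, evidence, and sweep count are made up. On a tree this message passing is exact; on a loopy graph like this one it returns approximate marginals.

```python
import numpy as np

# Hypothetical pairwise Markov network: a 3-cycle A - B - C - A.
edges = [('A', 'B'), ('B', 'C'), ('C', 'A')]
# Pairwise potentials psi[(i, j)][x_i, x_j] favoring agreement.
psi = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}
# Unary potentials (local evidence); A leans strongly toward state 0.
unary = {'A': np.array([0.9, 0.1]),
         'B': np.array([0.5, 0.5]),
         'C': np.array([0.5, 0.5])}

nodes = list(unary)
neighbors = {v: [] for v in nodes}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

def potential(i, j):
    """Pairwise table oriented as [x_i, x_j]."""
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

# Messages m[(i, j)](x_j), initialized uniform.
m = {(i, j): np.full(2, 0.5) for i in nodes for j in neighbors[i]}

for _ in range(50):                       # fixed number of synchronous sweeps
    new = {}
    for i, j in m:
        # Collect evidence at i from every neighbor except the recipient j...
        h = unary[i].copy()
        for k in neighbors[i]:
            if k != j:
                h = h * m[(k, i)]
        # ...then push it through the pairwise potential (sum over x_i).
        msg = potential(i, j).T @ h
        new[(i, j)] = msg / msg.sum()
    m = new

# Approximate marginals: unary potential times all incoming messages.
for v in nodes:
    b = unary[v].copy()
    for k in neighbors[v]:
        b = b * m[(k, v)]
    print(v, b / b.sum())
```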


Latest Content

We introduce a new family of energy-based probabilistic graphical models for efficient unsupervised learning. Its definition is motivated by the control of the spin-glass properties of the Ising model described by the weights of Boltzmann machines. We use it to learn the Bars and Stripes dataset of various sizes and the MNIST dataset, and show how they quickly achieve the performance offered by standard methods for unsupervised learning. Our results indicate that the standard initialization of Boltzmann machines with random weights equivalent to spin-glass models is an unnecessary bottleneck in the process of training. Furthermore, this new family allows for very easy access to low-energy configurations, which points to new, efficient training algorithms. The simplest variant of such algorithms approximates the negative phase of the log-likelihood gradient with no Markov chain Monte Carlo sampling costs at all, and with an accuracy sufficient to achieve good learning and generalization.

