A restricted Boltzmann machine (RBM) is an undirected graphical model constructed for discrete or continuous random variables, with two layers, one hidden and one visible, and no conditional dependency within a layer. In recent years, RBMs have risen to prominence due to their connection to deep learning. By treating a hidden layer of one RBM as the visible layer in a second RBM, a deep architecture can be created. RBMs are thought to thereby have the ability to encode very complex and rich structures in data, making them attractive for supervised learning. However, the generative behavior of RBMs is largely unexplored. In this paper, we discuss the relationship between RBM parameter specification in the binary case and model properties such as degeneracy, instability and uninterpretability. We also describe the difficulties that arise in likelihood-based and Bayes fitting of such (highly flexible) models, especially as Gibbs sampling (quasi-Bayes) methods are often advocated for the RBM model structure.
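To make the model structure concrete, the following is a minimal illustrative sketch (not code from the paper) of a binary RBM and the block Gibbs sampling the abstract refers to. The parameter values `W`, `b`, `c` and the chain length are arbitrary choices for demonstration; the key point is that, with no dependencies within a layer, each layer can be resampled in one block conditional on the other.

```python
import numpy as np

# Illustrative binary RBM: visible units v in {0,1}^m, hidden units h in {0,1}^n.
# Weights W (m x n), visible biases b, hidden biases c are hypothetical values,
# not parameter settings discussed in the paper.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    # Units within a layer are conditionally independent given the other
    # layer, so each layer is resampled as a single block.
    p_h = sigmoid(c + v @ W)            # P(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(b + h @ W.T)          # P(v_i = 1 | h)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h

m, n = 6, 3
W = rng.normal(scale=0.1, size=(m, n))  # small random weights (arbitrary)
b = np.zeros(m)
c = np.zeros(n)

v = rng.integers(0, 2, size=m).astype(float)
for _ in range(100):                    # run the chain for a fixed burn-in
    v, h = gibbs_step(v, W, b, c, rng)
```

After burn-in, `v` is (approximately) a draw from the RBM's marginal distribution over the visible layer; stacking RBMs, as the abstract describes, treats the sampled `h` of one machine as the visible layer of the next.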