Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets.
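As a minimal sketch of what a q-parametrized upper bound on the log-partition function can look like (the abstract does not give the exact form, so the bound below is an illustrative assumption, not necessarily the paper's): write $\tilde p_\theta(x)$ for the unnormalized model density and $q(x)$ for the variational distribution, so that $Z(\theta) = \mathbb{E}_{x \sim q}\!\left[\tilde p_\theta(x)/q(x)\right]$. Applying $(\mathbb{E}[w])^2 \le \mathbb{E}[w^2]$ to the importance weights $w = \tilde p_\theta(x)/q(x)$ gives

$$
\log Z(\theta)
= \log \mathbb{E}_{x \sim q}\!\left[\frac{\tilde p_\theta(x)}{q(x)}\right]
\;\le\; \frac{1}{2}\,\log \mathbb{E}_{x \sim q}\!\left[\frac{\tilde p_\theta(x)^2}{q(x)^2}\right],
$$

with equality when $q(x) \propto \tilde p_\theta(x)$. A bound of this shape explains the claimed capabilities: the right-hand side is a plain expectation under q, so it can be estimated by Monte Carlo from samples of q at any point during training (tracking the partition function), and a learned q that is close to the model distribution can serve as a cheap proposal for sampling.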