We introduce neural Markov logic networks (NMLNs), a statistical relational learning system that borrows ideas from Markov logic. Like Markov logic networks (MLNs), NMLNs define an exponential-family model over distributions on possible worlds, but unlike MLNs, they do not rely on explicitly specified first-order logic rules. Instead, NMLNs learn an implicit representation of such rules in the form of a neural network that acts as a potential function on fragments of the relational structure. Like many neural-symbolic methods, NMLNs can exploit embeddings of constants, but unlike those methods, NMLNs also work well in their absence. This is particularly important for prediction in settings other than the transductive one. We showcase the potential of NMLNs on knowledge-base completion, triple classification, and generation of molecular (graph) data.
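To make the exponential-family form described above concrete, the following display is a minimal sketch of such a distribution, assuming fragments of a fixed size k and a generic neural potential; the symbols \(\Gamma_k(\omega)\) and \(\phi_\theta\) are illustrative notation introduced here, not taken verbatim from the paper.

\[
p_\theta(\omega) \;=\; \frac{1}{Z(\theta)} \exp\!\Big( \sum_{\gamma \in \Gamma_k(\omega)} \phi_\theta(\gamma) \Big),
\qquad
Z(\theta) \;=\; \sum_{\omega'} \exp\!\Big( \sum_{\gamma \in \Gamma_k(\omega')} \phi_\theta(\gamma) \Big),
\]

where \(\omega\) ranges over possible worlds, \(\Gamma_k(\omega)\) denotes the set of size-k fragments of the relational structure of \(\omega\), and \(\phi_\theta\) is the neural potential that plays the role of the explicitly specified first-order rules in a standard MLN.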