Several indices used in a factor graph data structure can be permuted without changing the underlying probability distribution. An algorithm that performs inference on a factor graph should ideally be equivariant or invariant to permutations of global indices of nodes, variable orderings within a factor, and variable assignment orderings. However, existing neural network-based inference procedures fail to take advantage of this inductive bias. In this paper, we precisely characterize these isomorphic properties of factor graphs and propose two inference models: Factor-Equivariant Neural Belief Propagation (FE-NBP) and Factor-Equivariant Graph Neural Networks (FE-GNN). FE-NBP is a neural network that generalizes BP and respects each of the above properties of factor graphs while FE-GNN is an expressive GNN model that relaxes an isomorphic property in favor of greater expressivity. Empirically, we demonstrate on both real-world and synthetic datasets, for both marginal inference and MAP inference, that FE-NBP and FE-GNN together cover a range of sample complexity regimes: FE-NBP achieves state-of-the-art performance on small datasets while FE-GNN achieves state-of-the-art performance on large datasets.
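The invariance claim in the first sentence can be checked directly on a toy model: relabeling the global variable indices of a factor graph (and its factor scopes accordingly) leaves every marginal unchanged. The sketch below is illustrative only; the factors `f`, `g` and the permutation are made up for the example and do not come from the paper.

```python
import itertools

# Toy factor graph over 3 binary variables with two pairwise factors.
# Demonstrates that permuting global variable indices does not change
# the underlying distribution (factors f, g are illustrative).
f = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}  # factor on (x0, x1)
g = {(0, 0): 1.5, (0, 1): 0.5, (1, 0): 2.0, (1, 1): 1.0}  # factor on (x1, x2)

def marginal(var, scopes):
    """Normalized marginal P(x_var = 1) by brute-force enumeration."""
    total, hit = 0.0, 0.0
    for x in itertools.product((0, 1), repeat=3):
        w = 1.0
        for factor, (i, j) in scopes:
            w *= factor[(x[i], x[j])]
        total += w
        if x[var] == 1:
            hit += w
    return hit / total

# Original indexing: f has scope (0, 1), g has scope (1, 2).
p_orig = marginal(0, [(f, (0, 1)), (g, (1, 2))])
# Apply the permutation sigma: 0 -> 2, 1 -> 1, 2 -> 0 to all scopes;
# variable 0 is now named 2, so we query the marginal of index 2.
p_perm = marginal(2, [(f, (2, 1)), (g, (1, 0))])
assert abs(p_orig - p_perm) < 1e-12  # identical marginal after relabeling
```

An ideal neural inference procedure would respect this symmetry by construction, rather than having to learn it from data; the same check extends to permutations of variable orderings within a factor and of assignment orderings.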