A graph generative model defines a distribution over graphs. One class of generative model is built from autoregressive neural networks, which generate a graph by sequentially adding nodes and edges. However, the likelihood of a graph under an autoregressive model is intractable, because many different sequences can produce the same graph; this makes maximum likelihood estimation challenging. Instead, in this work we derive the exact joint probability over the graph and the node ordering of the sequential process. From this joint, we approximately marginalize out the node orderings and compute a lower bound on the log-likelihood using variational inference. We train graph generative models by maximizing this bound, without relying on the ad-hoc node orderings used by previous methods. Our experiments show that the resulting log-likelihood bound is significantly tighter than the bounds of previous schemes. Moreover, models fitted with the proposed algorithm generate high-quality graphs that match the structure of target graphs not seen during training. We have made our code publicly available at \url{https://github.com/tufts-ml/graph-generation-vi}.
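The lower bound described above has the standard variational form. As a sketch, using notation not fixed by the abstract ($G$ for a graph, $\pi$ for a node ordering, and $q$ for a variational distribution over orderings given the graph):

```latex
\log p(G) \;=\; \log \sum_{\pi} p(G, \pi)
\;\ge\; \mathbb{E}_{q(\pi \mid G)}\!\left[ \log p(G, \pi) - \log q(\pi \mid G) \right]
```

The inequality follows from Jensen's inequality, and the gap is the KL divergence between $q(\pi \mid G)$ and the true posterior over orderings, so a better-fitted $q$ yields a tighter bound than fixing a single ad-hoc ordering.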