A wide range of graph generative models have been proposed, necessitating effective methods to evaluate their quality. So far, most techniques use either traditional metrics based on subgraph counting, or the representations of randomly initialized Graph Neural Networks (GNNs). We propose using representations from contrastively trained GNNs, rather than random GNNs, and show that this yields more reliable evaluation metrics. Neither traditional approaches nor GNN-based approaches dominate the other, however: we give examples of graph pairs that each approach is unable to distinguish. We demonstrate that Graph Substructure Networks (GSNs), which combine aspects of both approaches, are better at distinguishing the distances between graph datasets.