As large-scale graphs become increasingly prevalent, processing, extracting, and analyzing large graph data poses significant computational challenges. Graph coarsening is one popular technique for reducing the size of a graph while maintaining its essential properties. Despite the rich graph coarsening literature, there has been only limited exploration of data-driven methods in the field. In this work, we leverage recent progress in deep learning on graphs for graph coarsening. We first propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, one needs to carefully choose the Laplace operator on the coarse graph and the associated projection/lift operators. Motivated by the observation that the current choice of edge weights for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way. Through extensive experiments on both synthetic and real networks, we demonstrate that our method significantly improves common graph coarsening methods under various metrics, reduction ratios, graph sizes, and graph types. It generalizes to graphs larger than the training graphs (up to $25\times$ their size), adapts to different losses (both differentiable and non-differentiable), and scales to much larger graphs than previous work.
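To make the notion of coarsening quality concrete, here is a minimal illustrative sketch (our own instantiation for exposition, not necessarily the exact measure used in this work; the symbols $L$, $P$, $\hat{L}(w)$, and $\mathcal{X}$ are introduced only for this example): given the Laplacian $L$ of the original graph, a projection operator $P$ mapping signals to the coarse graph, and a coarse Laplacian $\hat{L}(w)$ whose edge weights $w$ are output by the graph neural network, one can score the coarsening by how well quadratic forms are preserved on a set of test vectors $\mathcal{X}$ (e.g., the low eigenvectors of $L$):
\[
  \mathcal{L}(w) \;=\; \sum_{x \in \mathcal{X}} \bigl|\, x^{\top} L\, x \;-\; (P x)^{\top} \hat{L}(w)\, (P x) \,\bigr|.
\]
Because such a quantity is differentiable in $w$, it can serve as an unsupervised training signal for the weight assignment map; non-differentiable quality measures can in principle be handled through surrogate losses.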