Graph convolutional networks (GCNs) allow us to learn topologically-aware node embeddings, which can be useful for classification or link prediction. However, by construction, they lack positional awareness and cannot capture long-range dependencies without adding more layers, which in turn leads to over-smoothing and increased time and space complexity. Further, the complex dependencies between nodes make mini-batching challenging, limiting their applicability to large graphs. This paper proposes a Scalable Multi-resolution Graph Representation Learning (SMGRL) framework that enables us to learn multi-resolution node embeddings efficiently. Our framework is model-agnostic and can be applied to any existing GCN model. We dramatically reduce training costs by training only on a reduced-dimension coarsening of the original graph, then exploit self-similarity to apply the resulting algorithm at multiple resolutions. Inference of these multi-resolution embeddings can be distributed across multiple machines to further reduce computational and memory requirements. The resulting multi-resolution embeddings can be aggregated to yield high-quality node embeddings that capture both long- and short-range dependencies between nodes. Our experiments show that this leads to improved classification accuracy, without incurring high computational costs.
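To make the pipeline described above concrete, the following is a minimal, illustrative sketch of the train-on-coarse, infer-at-multiple-resolutions idea, not the authors' implementation. All names (`gcn_layer`, `coarsen`, `assign`) and the specific choices (cluster-assignment coarsening, feature averaging, concatenation as the aggregation step) are hypothetical placeholders for whatever the full framework actually uses.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: add self-loops, symmetrically normalize, transform, ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def coarsen(A, X, assign):
    """Collapse nodes into super-nodes given a hard cluster-assignment matrix (n x k)."""
    A_c = assign.T @ A @ assign                          # coarse adjacency
    X_c = assign.T @ X / assign.sum(axis=0)[:, None]     # mean of member features
    return A_c, X_c

# Toy graph: 6 nodes, 2 features, coarsened into 3 super-nodes.
rng = np.random.default_rng(0)
A = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
              [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], dtype=float)
X = rng.normal(size=(6, 2))
assign = np.zeros((6, 3))
assign[[0, 1], 0] = assign[[2, 3], 1] = assign[[4, 5], 2] = 1

# 1) Fit GCN parameters on the coarsened graph only (here the weights are just
#    randomly initialized; in practice they would be trained on the coarse graph).
A_c, X_c = coarsen(A, X, assign)
W = rng.normal(size=(2, 4))

# 2) Reuse the same learned weights at every resolution (the self-similarity assumption).
H_coarse = gcn_layer(A_c, X_c, W)
H_fine = gcn_layer(A, X, W)

# 3) Aggregate: lift coarse embeddings back to the original nodes and combine them
#    with the fine-grained embeddings to capture both long- and short-range structure.
H_lifted = assign @ H_coarse                             # each node inherits its cluster's embedding
H_multi = np.concatenate([H_fine, H_lifted], axis=1)     # multi-resolution node embedding
print(H_multi.shape)                                     # (6, 8)
```

Because step 2 only reuses already-trained weights, inference at each resolution is independent and could be assigned to different machines, which is the basis of the distributed-inference claim above.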