In this paper, we present a general framework to scale graph autoencoders (AE) and graph variational autoencoders (VAE). This framework leverages graph degeneracy concepts to train models from a dense subset of nodes only, rather than from the entire graph. Together with a simple yet effective propagation mechanism, our approach significantly improves scalability and training speed while preserving performance. We evaluate and discuss our method on several variants of existing graph AE and VAE, providing the first application of these models to large graphs with up to millions of nodes and edges. We achieve empirically competitive results w.r.t. several popular scalable node embedding methods, emphasizing the relevance of pursuing further research towards more scalable graph AE and VAE.
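For concreteness, the sketch below illustrates one plausible instantiation of this pipeline, under two assumptions that go beyond the abstract: that the dense training subset is a k-core subgraph (the standard graph degeneracy construction, computed here with networkx.k_core), and that the propagation mechanism assigns each out-of-core node the average of its already-embedded neighbors' vectors. The function train_graph_ae is a hypothetical stand-in for any graph AE/VAE trainer.

```python
# Minimal sketch: train on the k-core, then propagate embeddings outward.
# Assumptions (not specified in the abstract): the dense subset is a k-core,
# and propagation is neighbor averaging. `train_graph_ae` is a hypothetical
# trainer returning one embedding vector per node of the subgraph, in node
# order; nx.k_core and np.mean are real library calls.

import networkx as nx
import numpy as np

def embed_with_core_training(G, k, train_graph_ae, dim=16):
    """Train an AE/VAE on the k-core only, then propagate embeddings outward."""
    core = nx.k_core(G, k)  # dense subgraph actually seen by the encoder
    Z = dict(zip(core.nodes(), train_graph_ae(core, dim)))

    # Propagate outward: at each pass, embed every node that has at least one
    # already-embedded neighbor, as the mean of those neighbors' vectors.
    remaining = set(G.nodes()) - set(Z)
    while remaining:
        frontier = [n for n in remaining
                    if any(u in Z for u in G.neighbors(n))]
        if not frontier:  # nodes unreachable from the core: random fallback
            for n in remaining:
                Z[n] = np.random.randn(dim)
            break
        new = {n: np.mean([Z[u] for u in G.neighbors(n) if u in Z], axis=0)
               for n in frontier}
        Z.update(new)
        remaining -= set(frontier)
    return Z
```

In this sketch, only the (much smaller) core subgraph is ever passed to the AE/VAE trainer, while the cheap averaging pass covers the rest of the graph, which is consistent with the scalability and training-speed gains claimed above.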