Graph representation learning (GRL) is critical for analyzing graph-structured data. However, most existing graph neural networks (GNNs) rely heavily on labeling information, which is normally expensive to obtain in the real world. Although some existing works aim to learn graph representations effectively in an unsupervised manner, they suffer from certain limitations, such as a heavy reliance on monotone contrastiveness and limited scalability. To overcome these problems, we introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming, namely G-Zoom, which learns node representations by leveraging the proposed adjusted zooming scheme. Specifically, this mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales: micro (i.e., node level), meso (i.e., neighborhood level), and macro (i.e., subgraph level). First, we generate two augmented views of the input graph via two different graph augmentations. Then, we establish contrastiveness at the above three scales progressively, from the node level through the neighborhood level to the subgraph level, where we maximize the agreement between graph representations across scales. While the micro and macro perspectives allow us to extract valuable clues from a given graph, the neighborhood-level contrastiveness gives G-Zoom a customizable option, based on our adjusted zooming scheme, to manually choose an optimal viewpoint lying between the micro and macro perspectives to better understand the graph data. Additionally, to make our model scalable to large graphs, we employ a parallel graph diffusion approach to decouple model training from the graph size. We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model consistently outperforms state-of-the-art methods.
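The three-scale objective described above (maximizing agreement between two augmented views at the node, neighborhood, and subgraph levels) can be sketched as a weighted sum of per-scale contrastive terms. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the InfoNCE-style loss, the cosine similarity, and the weighting coefficients `alpha`, `beta`, and `gamma` are common choices in contrastive GRL and are assumed here for concreteness.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between rows of two embedding matrices.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def contrastive_loss(z1, z2, tau=0.5):
    # InfoNCE-style loss: row i of z1 (view 1) and row i of z2 (view 2)
    # are a positive pair; all other rows of z2 serve as negatives.
    sim = np.exp(cosine_sim(z1, z2) / tau)
    pos = np.diag(sim)
    return float(np.mean(-np.log(pos / sim.sum(axis=1))))

def multi_scale_loss(node1, node2, nbr1, nbr2, sub1, sub2,
                     alpha=1.0, beta=1.0, gamma=1.0):
    # Weighted sum of micro (node-level), meso (neighborhood-level),
    # and macro (subgraph-level) contrastive terms across the two views.
    return (alpha * contrastive_loss(node1, node2)
            + beta * contrastive_loss(nbr1, nbr2)
            + gamma * contrastive_loss(sub1, sub2))
```

In practice the neighborhood- and subgraph-level embeddings would be pooled from node embeddings of the two augmented views; here they are treated as given matrices so the objective itself stays in focus.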