It is hard to directly implement Graph Neural Networks (GNNs) on large-scale graphs. Beyond existing neighbor sampling techniques, scalable methods that decouple graph convolutions from the other learnable transformations into a preprocessing step and a post classifier allow standard minibatch training. By replacing the redundant concatenation operation in SIGN with an attention mechanism, we propose Scalable and Adaptive Graph Neural Networks (SAGN). SAGN can adaptively gather neighborhood information across different hops. To further improve scalable models on semi-supervised learning tasks, we propose the Self-Label-Enhance (SLE) framework, which combines a self-training approach with label propagation in depth. We augment the base model with a scalable node label module, then iteratively train models and enhance the train set over several stages. To generate the input of the node label module, we directly apply label propagation to one-hot encoded label vectors without inner random masking; we find empirically that label leakage is effectively alleviated after graph convolutions. The hard pseudo labels in the enhanced train set participate in label propagation together with the true labels. Experiments on both inductive and transductive datasets demonstrate that, compared with other sampling-based and sampling-free methods, SAGN achieves better or comparable results, and SLE further improves performance.
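To make the hop-level attention concrete, the following is a minimal sketch of SAGN-style adaptive aggregation over precomputed hop features; the module and parameter names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class HopAttention(nn.Module):
    """Sketch: attention over precomputed hop features, replacing the
    concatenation used in SIGN. Each hop representation is scored against
    a learnable projection and the hops are combined by a softmax-weighted
    sum, so every node can adaptively mix information from different hops.
    """

    def __init__(self, in_dim, hidden_dim, num_hops):
        super().__init__()
        # One linear transform per hop, applied to features precomputed offline.
        self.transforms = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim) for _ in range(num_hops)]
        )
        self.attn = nn.Linear(hidden_dim, 1)  # scores each hop representation

    def forward(self, hop_feats):
        # hop_feats: list of K tensors [N, in_dim], precomputed as A^k X.
        hs = torch.stack(
            [t(x) for t, x in zip(self.transforms, hop_feats)], dim=1
        )  # [N, K, hidden_dim]
        alpha = torch.softmax(self.attn(torch.tanh(hs)), dim=1)  # [N, K, 1]
        return (alpha * hs).sum(dim=1)  # per-node adaptive mixture of hops
```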
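The label-module input described above can likewise be sketched as plain propagation of one-hot label vectors, with hard pseudo labels joining the true labels and no inner random masking; the function signature and propagation depth below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def propagate_labels(adj_norm, labels, train_mask, pseudo_mask,
                     num_classes, k=3):
    """Sketch: generate the node label module's input by propagating
    one-hot encoded labels with a normalized sparse adjacency `adj_norm`.
    True train labels and hard pseudo labels are seeded together, as the
    abstract describes; no random masking is applied, since leakage is
    reported to be empirically alleviated after graph convolutions.
    """
    y = torch.zeros(labels.size(0), num_classes)
    seed = train_mask | pseudo_mask  # pseudo labels join the true labels
    y[seed] = F.one_hot(labels[seed], num_classes).float()
    for _ in range(k):
        y = torch.sparse.mm(adj_norm, y)  # one propagation step
    return y
```

Because the propagation runs offline on fixed vectors, this step fits the same precompute-then-minibatch pattern as the feature side of the model.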