Recently, contrastive-learning-based augmentation has surged in the computer vision domain, where simple operations such as rotation, cropping, and flipping, combined with dedicated algorithms, dramatically improve model generalization and robustness. Following this trend, some pioneering attempts apply a similar idea to graph data. However, unlike for images, it is much more difficult to design reasonable augmentations for graphs without changing their nature. Although promising, current graph contrastive learning has not achieved performance comparable to visual contrastive learning. We conjecture that its performance may be limited by violations of the label-invariance assumption underlying augmentation. In light of this, we propose a label-invariant augmentation for graph-structured data to address this challenge. Unlike node/edge modification and subgraph extraction, we perform the augmentation in the representation space and generate augmented samples in the most difficult direction while keeping their labels the same as those of the original samples. In the semi-supervised scenario, we demonstrate that our method outperforms classical graph neural network based methods and recent graph contrastive learning methods on eight benchmark graph-structured datasets, followed by several in-depth experiments that further explore label-invariant augmentation from multiple aspects.
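The core idea, augmenting in representation space along the most difficult direction while preserving the label, can be sketched as follows. This is a minimal toy illustration, not the paper's exact algorithm: it perturbs a representation vector along the gradient-ascent direction of a cross-entropy loss under a simple linear classifier (`W`, `b` are assumed placeholders), and falls back to the original representation if the predicted label would flip.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def label_invariant_augment(z, W, b, y, step=0.5):
    """Perturb representation z in the hardest direction (gradient
    ascent on the classification loss) while keeping the predicted
    label equal to y. Hypothetical sketch with a linear classifier."""
    p = softmax(W @ z + b)
    onehot = np.eye(len(p))[y]
    # Gradient of the cross-entropy loss w.r.t. the representation z;
    # moving along it makes the sample harder to classify correctly.
    grad = W.T @ (p - onehot)
    z_aug = z + step * grad / (np.linalg.norm(grad) + 1e-12)
    # Label-invariance check: reject the perturbation if the label flips.
    if np.argmax(W @ z_aug + b) != y:
        return z
    return z_aug

# Toy usage: the augmented representation keeps the original label.
rng = np.random.default_rng(0)
z = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
y = int(np.argmax(W @ z + b))
z_aug = label_invariant_augment(z, W, b, y)
```

A real implementation would compute the gradient through a trained graph encoder rather than a fixed linear head, but the reject-if-label-flips guard captures the label-invariance constraint described above.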