Graph contrastive learning (GCL) has become a prevalent approach to tackling the supervision-shortage issue in graph learning tasks. Many recent GCL methods rely on manually designed augmentation techniques that apply challenging augmentations to the original graph in order to yield robust representations. Although many of them achieve remarkable performance, existing GCL methods still struggle to improve model robustness without risking the loss of task-relevant information, because they ignore the fact that augmentation-induced latent factors can be highly entangled with the original graph, which makes it more difficult to discriminate task-relevant information from irrelevant information. Consequently, the learned representation is either brittle or unilluminating. In light of this, we introduce Adversarial Cross-View Disentangled Graph Contrastive Learning (ACDGCL), which follows the information bottleneck principle to learn minimal yet sufficient representations from graph data. Specifically, our proposed model elicits the augmentation-invariant and augmentation-dependent factors separately. In addition to the conventional contrastive loss, which guarantees the consistency and sufficiency of the representations across different contrastive views, we introduce a cross-view reconstruction mechanism to pursue representation disentanglement. Moreover, an adversarial view is added as a third view in the contrastive loss to enhance model robustness. We empirically demonstrate that our proposed model outperforms the state of the art on the graph classification task over multiple benchmark datasets.
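The three-view objective outlined above can be illustrated with a minimal NumPy sketch. Note that the InfoNCE form of the contrastive loss, the half/half split into augmentation-invariant and augmentation-dependent factors, the concatenation-style "decoder", and the weight `lam` are all illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE contrastive loss between two views (rows are paired)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # pull matched pairs together

def cross_view_recon(inv_a, dep_b, x_b):
    """Toy cross-view reconstruction: rebuild view b from the
    augmentation-invariant factor of view a and the augmentation-dependent
    factor of view b. A plain concatenation stands in for a learned decoder."""
    recon = np.concatenate([inv_a, dep_b], axis=1)
    return np.mean((recon - x_b) ** 2)

def acdgcl_loss(z_orig, z_aug, z_adv, lam=1.0):
    """Three-view objective: contrast the original view against the augmented
    and adversarial views, plus a cross-view reconstruction penalty that
    encourages the invariant/dependent factors to disentangle."""
    d = z_orig.shape[1] // 2
    inv_o, dep_o = z_orig[:, :d], z_orig[:, d:]  # assumed factor split
    inv_a, dep_a = z_aug[:, :d], z_aug[:, d:]
    contrast = info_nce(z_orig, z_aug) + info_nce(z_orig, z_adv)
    recon = (cross_view_recon(inv_o, dep_a, z_aug) +
             cross_view_recon(inv_a, dep_o, z_orig))
    return contrast + lam * recon
```

In a real pipeline the three inputs would come from a shared graph encoder applied to the original graph, an augmented graph, and an adversarially perturbed graph; here they are just embedding matrices, which suffices to show how the terms combine.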