We present the masked graph autoencoder (MaskGAE), a self-supervised learning framework for graph-structured data. Unlike previous graph autoencoders (GAEs), MaskGAE adopts masked graph modeling (MGM) as a principled pretext task: masking a portion of edges and attempting to reconstruct the missing edges from the partially visible, unmasked graph structure. To understand whether MGM can help GAEs learn better representations, we provide both theoretical and empirical evidence justifying the benefits of this pretext task. Theoretically, we establish connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs. Empirically, we conduct extensive experiments on a number of benchmark datasets, demonstrating the superiority of MaskGAE over several state-of-the-art methods on both link prediction and node classification tasks. Our code is publicly available at \url{https://github.com/EdisonLeeeee/MaskGAE}.
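To illustrate the MGM pretext task described above, the following is a minimal sketch in PyTorch, assuming an encoder that maps node features and a visible edge set to node embeddings and a decoder that scores node pairs; all names (\texttt{mask\_edges}, \texttt{mgm\_loss}, \texttt{mask\_ratio}) are illustrative placeholders rather than the authors' actual implementation.

\begin{verbatim}
import torch
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.7):
    """Randomly split edges into a visible set and a masked set to reconstruct."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    num_masked = int(mask_ratio * num_edges)
    masked = edge_index[:, perm[:num_masked]]    # edges to reconstruct
    visible = edge_index[:, perm[num_masked:]]   # edges the encoder may see
    return visible, masked

def mgm_loss(encoder, decoder, x, edge_index, mask_ratio=0.7):
    """One pretext-task step: encode the visible (unmasked) graph, then
    score masked edges against uniformly sampled negative node pairs."""
    visible, masked = mask_edges(edge_index, mask_ratio)
    z = encoder(x, visible)  # embeddings from the partially visible structure
    neg = torch.randint(0, x.size(0), masked.shape, device=x.device)
    pos_score = decoder(z[masked[0]], z[masked[1]])
    neg_score = decoder(z[neg[0]], z[neg[1]])
    return (F.binary_cross_entropy_with_logits(pos_score, torch.ones_like(pos_score))
            + F.binary_cross_entropy_with_logits(neg_score, torch.zeros_like(neg_score)))
\end{verbatim}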