Graph neural networks (GNNs) work well when the graph structure is provided. However, this structure may not always be available in real-world applications. One solution to this problem is to infer a task-specific latent structure and then apply a GNN to the inferred graph. Unfortunately, the space of possible graph structures grows super-exponentially with the number of nodes and so the task-specific supervision may be insufficient for learning both the structure and the GNN parameters. In this work, we propose the Simultaneous Learning of Adjacency and GNN Parameters with Self-supervision, or SLAPS, a method that provides more supervision for inferring a graph structure through self-supervision. A comprehensive experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several models that have been proposed to learn a task-specific graph structure on established benchmarks.
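The structure-inference step described above can be illustrated with a minimal sketch. The code below is an assumption for illustration, not the paper's exact generator: it builds a symmetric kNN adjacency matrix from node-feature similarity, a common way to initialize or parameterize a latent graph before passing it to a GNN.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_adjacency(features, k):
    """Infer a symmetric kNN adjacency from node features.

    Each node links to its k most similar neighbours; keeping the
    union of directed edges makes the matrix symmetric. This is a
    hypothetical sketch of latent-structure inference, not SLAPS's
    exact graph generator.
    """
    n = len(features)
    adj = [[0.0] * n for _ in range(n)]
    for i in range(n):
        sims = [(cosine(features[i], features[j]), j)
                for j in range(n) if j != i]
        sims.sort(reverse=True)
        for s, j in sims[:k]:
            adj[i][j] = adj[j][i] = max(adj[i][j], s)
    return adj

# Four nodes in two feature clusters; with k=1 each node links
# to the other member of its cluster.
features = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
adj = knn_adjacency(features, k=1)
```

In a joint-learning setting such as the one the abstract describes, an adjacency like this would be treated as a learnable quantity and optimized together with the GNN parameters, with a self-supervised objective (e.g. reconstructing node features) supplying the extra training signal.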