Graph Neural Networks (GNNs) have made tremendous strides in performance on graph-structured problems, especially in the domains of natural language processing, computer vision, and recommender systems. Inspired by the success of the transformer architecture, an ever-growing body of work on attention variants of GNNs has attempted to advance the state of the art on many of these problems. Incorporating "attention" into graph mining is viewed as a way to overcome the noisiness, heterogeneity, and complexity of graph-structured data, as well as to encode soft inductive bias. It is therefore crucial and advantageous to study these variants from a bird's-eye view and assess their strengths and weaknesses. We provide a systematic and focused tutorial centered on attention-based GNNs, in the hope of benefiting researchers working on graph-structured problems. Our tutorial examines GNN variants through the lens of the attention function and iteratively builds the reader's understanding of the different graph attention variants.
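To make the central object of the tutorial concrete, the following is a minimal sketch of one graph-attention step in the style of GAT (Velickovic et al.), one of the best-known attention variants: pairwise scores between a node and its neighbors are computed with a shared weight vector, softmax-normalized per node, and then used to aggregate neighbor features. All names and shapes here are illustrative assumptions, not the notation of any specific paper.

```python
# Hypothetical single-head GAT-style attention layer, implemented with NumPy
# for illustration only (real implementations use PyTorch Geometric, DGL, etc.).
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def gat_layer(H, adj, W, a):
    """One single-head GAT-style attention layer.

    H:   (N, F)  node features
    adj: (N, N)  0/1 adjacency matrix (self-loops assumed present)
    W:   (F, Fp) shared linear transform
    a:   (2*Fp,) attention weight vector
    """
    Z = H @ W                          # project features: (N, Fp)
    out = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        nbrs = np.flatnonzero(adj[i])  # neighbors of node i (incl. itself)
        # unnormalized score e_ij = LeakyReLU(a^T [z_i || z_j])
        scores = leaky_relu(
            np.array([a @ np.concatenate([Z[i], Z[j]]) for j in nbrs])
        )
        alpha = softmax(scores)        # attention coefficients over neighbors
        out[i] = alpha @ Z[nbrs]       # attention-weighted aggregation
    return out

# Tiny 3-node example graph.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
adj = np.array([[1, 1, 0],
                [1, 1, 1],
                [0, 1, 1]])
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))
print(gat_layer(H, adj, W, a).shape)
```

Different attention variants surveyed in the tutorial differ mainly in how the score function inside this loop is defined (additive, dot-product, gated, and so on), which is why the attention function is a useful organizing lens.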