Many learning tasks require dealing with graph data, which contains rich relational information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all demand a model that learns from graph inputs. In other domains, such as learning from non-structural data like text and images, reasoning over extracted structures (such as the dependency trees of sentences and the scene graphs of images) is an important research topic that also requires graph reasoning models. Graph neural networks (GNNs) are neural models that capture the dependencies within graphs via message passing between the nodes of a graph. In recent years, variants of GNNs such as the graph convolutional network (GCN), graph attention network (GAT), and graph recurrent network (GRN) have demonstrated ground-breaking performance on many deep learning tasks. In this survey, we propose a general design pipeline for GNN models, discuss the variants of each component, systematically categorize the applications, and propose four open problems for future research.
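To make the message-passing idea concrete, here is a minimal sketch of a single GCN-style layer in plain NumPy. The symmetric normalization D^{-1/2}(A + I)D^{-1/2} follows the standard GCN formulation; the toy graph, feature dimensions, and random weights are illustrative assumptions, not anything prescribed by the survey.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: aggregate neighbor features, then apply a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops so each node keeps its own features
    deg = A_hat.sum(axis=1)                   # degrees of the self-loop-augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)    # message passing followed by ReLU

# Toy 4-node graph (undirected path 0-1-2-3) with 3-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 3)
W = np.random.randn(3, 2)                     # projects node features from 3 to 2 dimensions

H = gcn_layer(A, X, W)
print(H.shape)                                # (4, 2): one updated embedding per node
```

Each node's new representation mixes its own features with those of its neighbors; stacking such layers lets information propagate across multi-hop neighborhoods, which is the core mechanism shared by GCN, GAT, and GRN variants discussed in the survey.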