Many learning tasks require dealing with graph data, which contains rich relational information among its elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like texts and images, reasoning over extracted structures, such as the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependencies within graphs via message passing between the nodes of a graph. Unlike standard neural networks, graph neural networks retain a state that can represent information from their neighborhoods with arbitrary depth. Although primitive GNNs were found difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on variants of graph neural networks, such as the graph convolutional network (GCN), graph attention network (GAT), and gated graph neural network (GGNN), have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize the applications, and propose four open problems for future research.
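To make the message-passing idea concrete, below is a minimal NumPy sketch of a single graph-convolution layer in the spirit of the GCN variant mentioned above, using the well-known propagation rule H' = σ(D̂^{-1/2} Â D̂^{-1/2} H W). The function and variable names are illustrative only and do not come from any particular library; stacking such layers is what lets a node's state absorb information from neighborhoods of increasing depth.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution (message-passing) step.

    A : (n, n) adjacency matrix of the graph
    H : (n, d_in) node feature/state matrix
    W : (d_in, d_out) learnable weight matrix
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                      # add self-loops so each node keeps its own state
    deg = A_hat.sum(axis=1)                    # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # symmetric normalization
    msg = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H  # each node aggregates its neighbors' states
    return np.maximum(msg @ W, 0.0)            # linear transform + ReLU

# Toy path graph with 4 nodes; two stacked layers propagate
# information over 2-hop neighborhoods.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))        # initial node features
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))
H1 = gcn_layer(A, H, W1)           # states after 1-hop aggregation
H2 = gcn_layer(A, H1, W2)          # states after 2-hop aggregation
print(H2.shape)                    # (4, 4) node embeddings
```

Other variants discussed in the survey differ mainly in the aggregation step: GAT replaces the fixed normalization with learned attention weights over neighbors, and GGNN replaces the per-layer update with a gated recurrent unit applied over propagation steps.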