Over the last decade, deep learning has reinvigorated the machine learning field, achieving state-of-the-art performance on many problems in computer vision, speech recognition, natural language processing, and various other tasks. In these domains, the data is generally represented in Euclidean space. Many other domains, however, produce data that lies in non-Euclidean space, for which a graph is an ideal representation: graphs naturally capture the dependencies and interrelationships between entities. Traditional handcrafted graph features are incapable of providing the inference required for various tasks on such complex data. Recently, advances in deep learning have increasingly been applied to graph-based tasks. This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning. For each graph-based learning setting, a taxonomy is provided that logically divides the methods belonging to that setting. The approaches for each learning task are analyzed from both theoretical and empirical standpoints. Further, we provide general architectural guidelines for building GNNs. Various applications and benchmark datasets are also presented, along with the open challenges that still hinder the general applicability of GNNs.