Graph Neural Networks (GNNs) are a class of architectures for graph-structured data and have been adopted in a wide range of tasks with remarkable results, such as link prediction, node classification, and graph classification. Generally, for a given node in a graph, a traditional GNN layer can be regarded as an aggregation over its one-hop neighbors, so a stack of such layers can fetch and update node states over multiple hops. For sparsely connected nodes, a single GNN layer struggles to gather enough information: few nodes are directly connected to them, and information from higher-order neighbors cannot be propagated. However, as the number of layers increases, the GNN model becomes prone to over-smoothing on densely connected nodes, which degrades accuracy. To tackle this issue, in this thesis we define a novel framework that allows an ordinary GNN model to accommodate more layers. Specifically, a node-degree-based gate is employed to adjust the weight of each layer dynamically, aiming to enhance the information aggregation ability while reducing the probability of over-smoothing. Experimental results show that our proposed model can effectively increase model depth and perform well on several datasets.
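To make the gating idea concrete, below is a minimal PyTorch sketch of one GNN layer whose output is blended with its input by a gate computed from node degree. The class name `DegreeGatedGNNLayer`, the dense-adjacency GCN-style aggregation, and the particular blend `g * x + (1 - g) * h` are illustrative assumptions, not the exact design of the thesis.

```python
import torch
import torch.nn as nn

class DegreeGatedGNNLayer(nn.Module):
    """GCN-style layer with a degree-based gate (hypothetical formulation):
    each node mixes its own state with the aggregated neighborhood message,
    with the mixing weight predicted from its degree."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # Gate: maps log-degree to a per-node scalar in (0, 1).
        self.gate = nn.Sequential(nn.Linear(1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: [N, dim] node features; adj: [N, N] dense 0/1 adjacency.
        deg = adj.sum(dim=1, keepdim=True)          # [N, 1] node degrees
        # Symmetric normalization D^{-1/2} A D^{-1/2}, guarding against deg 0.
        d_inv_sqrt = deg.clamp(min=1).pow(-0.5)
        norm_adj = d_inv_sqrt * adj * d_inv_sqrt.t()
        h = torch.relu(self.linear(norm_adj @ x))   # aggregated message
        g = self.gate(deg.log1p())                  # [N, 1] degree-based gate
        # Blend self state and aggregated neighborhood per node; the gate
        # lets dense nodes retain more of their own state (sketch assumption).
        return g * x + (1.0 - g) * h

if __name__ == "__main__":
    x = torch.randn(5, 16)
    adj = (torch.rand(5, 5) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()  # symmetrize the toy graph
    out = DegreeGatedGNNLayer(16)(x, adj)
    print(out.shape)  # torch.Size([5, 16])
```

Because the gate is a function of degree rather than a fixed residual weight, stacking many such layers lets high-degree nodes damp repeated aggregation (mitigating over-smoothing) while low-degree nodes keep absorbing multi-hop information, which is one plausible reading of the abstract's "dynamic layer weights".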