Graph Neural Networks (GNNs) are increasingly becoming the method of choice for graph learning. They exploit the semi-supervised nature of deep learning, and they bypass computational bottlenecks associated with traditional graph learning methods. In addition to the feature matrix $X$, GNNs need an adjacency matrix $A$ to perform feature propagation; in many cases this adjacency matrix is missing. We introduce a graph construction scheme that builds the adjacency matrix $A$ from both unsupervised and supervised information. The unsupervised information characterizes the neighborhood around points. We use Principal Axis trees (PA-trees) as the source of unsupervised information, creating edges between points that fall into the same leaf node. For supervised information, we use the concepts of penalty and intrinsic graphs: a penalty graph connects points with different class labels, whereas an intrinsic graph connects points with the same class label. We use the penalty and intrinsic graphs to remove or add edges in the graph constructed via the PA-tree. This graph construction scheme was tested on two well-known GNNs: 1) Graph Convolutional Network (GCN) and 2) Simple Graph Convolution (SGC). The experiments show that SGC is preferable, as it is faster and delivers results that are better than or equal to those of GCN. We also test the effect of oversmoothing on both GCN and SGC, and find that the degree of smoothing must be selected carefully for SGC to avoid oversmoothing.
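The construction scheme described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: it assumes the PA-tree leaf assignment of each point is already computed (`leaf_ids`), that unlabeled points carry the label `-1`, and that the penalty graph removes edges between differently labeled points while the intrinsic graph adds edges between same-label points.

```python
import numpy as np

def build_adjacency(leaf_ids, labels):
    """Hypothetical sketch: adjacency from PA-tree leaves plus
    penalty/intrinsic edge editing on the labeled subset."""
    n = len(leaf_ids)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # unsupervised step: connect points in the same PA-tree leaf
            if leaf_ids[i] == leaf_ids[j]:
                A[i, j] = A[j, i] = 1
            # supervised step, applied only when both points are labeled
            if labels[i] >= 0 and labels[j] >= 0:
                if labels[i] != labels[j]:
                    # penalty graph: drop edges across different classes
                    A[i, j] = A[j, i] = 0
                else:
                    # intrinsic graph: add edges within the same class
                    A[i, j] = A[j, i] = 1
    return A
```

For example, two same-leaf points with different labels end up disconnected (the penalty graph overrides the leaf edge), while unlabeled points in the same leaf stay connected.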
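The smoothing level mentioned for SGC corresponds to the number of propagation steps $K$ in its standard precomputation, $S^K X$, where $S$ is the symmetrically normalized adjacency with self-loops. A minimal sketch of this step (the function name is ours):

```python
import numpy as np

def sgc_features(A, X, K=2):
    """K-step smoothed features S^K X used by SGC, where
    S = D^{-1/2} (A + I) D^{-1/2}. Larger K means more smoothing."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # normalized adjacency
    return np.linalg.matrix_power(S, K) @ X
```

Because each application of $S$ averages neighboring features, choosing $K$ too large drives all node representations toward one another, which is the oversmoothing effect the abstract warns about.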