Graph Neural Networks have emerged as a useful tool for learning on graph data by applying additional constraints based on the graph structure. These graphs are often created with assumed intrinsic relations between the entities. In recent years, there have been tremendous improvements in architecture design, pushing up performance on various prediction tasks. In general, these neural architectures combine layer depth and node feature aggregation in the same step. This makes it challenging to analyze the importance of features at various hops and the expressiveness of the neural network layers. As different graph datasets show varying levels of homophily and heterophily in features and class label distributions, it becomes essential to understand which features are important for the prediction task without any prior information. In this work, we decouple the node feature aggregation step from the depth of the graph neural network and introduce several key design strategies for graph neural networks. More specifically, we propose to use softmax as a regularizer and "Soft-Selector" of features aggregated from neighbors at different hop distances, and "Hop-Normalization" over GNN layers. Combining these techniques, we present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that the proposed model outperforms other state-of-the-art GNN models, achieving up to 64% improvement in accuracy on node classification tasks. Moreover, analyzing the learned soft-selection parameters of the model provides a simple way to study the importance of features in the prediction tasks. Finally, we demonstrate with experiments that the model scales to large graphs with millions of nodes and billions of edges.
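To make the idea concrete, the sketch below shows one way the soft-selection and hop-normalization described above could be realized in PyTorch. It assumes the hop-wise aggregated feature matrices (e.g. X, AX, A^2X, ...) are precomputed and passed in as a list; the class name, layer sizes, and classifier head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FSGNNSketch(nn.Module):
    """Minimal sketch of the soft-selection / hop-normalization idea.

    Hop-aggregated features are assumed to be precomputed outside the model;
    all dimensions and the classifier head are illustrative choices.
    """

    def __init__(self, num_hops, in_dim, hidden_dim, num_classes):
        super().__init__()
        # One linear transform per hop-aggregated feature matrix.
        self.hop_fcs = nn.ModuleList(
            nn.Linear(in_dim, hidden_dim) for _ in range(num_hops)
        )
        # One learnable scalar per hop; the softmax over these scalars acts as
        # the "Soft-Selector" (and regularizer) of hop features.
        self.hop_scores = nn.Parameter(torch.ones(num_hops))
        self.classifier = nn.Linear(num_hops * hidden_dim, num_classes)

    def forward(self, hop_features):
        # hop_features: list of [num_nodes, in_dim] tensors, one per hop.
        weights = F.softmax(self.hop_scores, dim=0)  # soft-selection weights
        outs = []
        for w, fc, x in zip(weights, self.hop_fcs, hop_features):
            h = fc(x)
            # "Hop-Normalization": L2-normalize each node's features per hop.
            h = F.normalize(h, p=2, dim=1)
            outs.append(w * h)
        h = torch.cat(outs, dim=1)           # concatenate weighted hop features
        return self.classifier(F.relu(h))    # shallow classification head
```

After training, inspecting `F.softmax(model.hop_scores, dim=0)` gives the learned per-hop weights, which is the kind of analysis the abstract refers to when discussing feature importance at different hops.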