Graph neural networks (GNNs) have been widely used to analyze graph-structured data, showing promising results in various applications such as node classification, link prediction and network recommendation. In this paper, we present a new graph attention neural network, namely GIPA, for attributed graph data learning. GIPA consists of three key components: attention, feature propagation and aggregation. Specifically, the attention component introduces a new multi-layer-perceptron-based multi-head attention mechanism to generate better non-linear feature mappings and representations than conventional implementations such as the dot product. The propagation component considers not only node features but also edge features, which differs from existing GNNs that merely consider node features. The aggregation component uses a residual connection to generate the final embedding. We evaluate the performance of GIPA on the Open Graph Benchmark proteins dataset (ogbn-proteins for short). The experimental results show that GIPA beats state-of-the-art models in terms of prediction accuracy; e.g., GIPA achieves an average ROC-AUC of $0.8700\pm 0.0010$ and outperforms all the previous methods listed on the ogbn-proteins leaderboard.
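The three components described above can be illustrated with a minimal NumPy sketch. This is a simplified, hypothetical single-head version with random weights, not the actual GIPA implementation: an MLP computes a per-edge attention logit from the concatenated source-node, destination-node, and edge features (instead of a dot product), a second MLP produces edge-aware messages, and a residual connection adds the aggregated messages back onto the input embedding. All parameter names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # two-layer perceptron with ReLU; used for both attention and propagation
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

# toy attributed graph: 3 nodes, directed edges (src -> dst) with edge features
d, de = 4, 2                                  # node / edge feature dims (made up)
nodes = rng.normal(size=(3, d))
edges = [(0, 2), (1, 2), (0, 1)]
edge_feats = rng.normal(size=(len(edges), de))

# hypothetical weights; real GIPA learns these and uses multiple heads
in_dim = 2 * d + de
Wa1, ba1 = rng.normal(size=(in_dim, 8)), np.zeros(8)
Wa2, ba2 = rng.normal(size=(8, 1)), np.zeros(1)   # scalar attention logit
Wp1, bp1 = rng.normal(size=(in_dim, 8)), np.zeros(8)
Wp2, bp2 = rng.normal(size=(8, d)), np.zeros(d)   # message mapped back to node dim

# attention component: MLP over [h_src, h_dst, e] instead of a dot product
logits = np.array([
    mlp(np.concatenate([nodes[s], nodes[t], edge_feats[i]]), Wa1, ba1, Wa2, ba2)[0]
    for i, (s, t) in enumerate(edges)
])

out = nodes.copy()                            # residual: start from input embedding
for t in range(len(nodes)):
    idx = [i for i, (_, dst) in enumerate(edges) if dst == t]
    if not idx:
        continue
    a = np.exp(logits[idx] - logits[idx].max())   # softmax over incoming edges
    a = a / a.sum()
    # propagation component: messages depend on node AND edge features
    msgs = np.stack([
        mlp(np.concatenate([nodes[edges[j][0]], nodes[t], edge_feats[j]]),
            Wp1, bp1, Wp2, bp2)
        for j in idx
    ])
    # aggregation component: attention-weighted sum plus residual connection
    out[t] = nodes[t] + (a[:, None] * msgs).sum(axis=0)

print(out.shape)  # → (3, 4)
```

Replacing the dot-product score with an MLP lets the attention weight depend non-linearly on both endpoint features and the edge feature, which is the key difference the abstract highlights over conventional graph attention.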