Data augmentation aims to generate new, synthetic features from the original data, which can yield a better data representation and improve the performance and generalizability of downstream tasks. However, data augmentation for graph-based models remains challenging, because graph data is more complex than traditional data: it consists of two components with different properties, the graph topology and the node attributes. In this paper, we study graph data augmentation for the Graph Convolutional Network (GCN), with the goal of improving node embeddings for semi-supervised node classification. Specifically, we apply a cosine-similarity-based cross operation to the original features to create new graph features, including new node attributes and new graph topologies, and we combine them into new pairwise inputs for view-specific GCNs. We then propose an attentional integration model that computes a weighted sum of the hidden node embeddings encoded by these GCNs to obtain the final node embeddings. We also impose a disparity constraint on these hidden node embeddings during training to ensure that non-redundant information is captured from the different features. Experimental results on five real-world datasets show that our method improves classification accuracy by a clear margin (+2.5% to +84.2%) over the original GCN model.
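To make the pipeline above concrete, the following is a minimal sketch (not the authors' released code), assuming PyTorch. The helper names `build_knn_graph`, `SimpleGCN`, `AttentionFusion`, and `hsic` are illustrative placeholders: the cosine-similarity cross operation is shown as a kNN graph built from attribute similarity, the attentional integration as a per-node softmax over view embeddings, and the disparity constraint as one plausible HSIC-style independence penalty.

```python
# Illustrative sketch of: (1) building a new topology from node attributes via cosine
# similarity, (2) encoding each (A, X) view with its own GCN, (3) fusing the hidden
# embeddings with attention, and (4) an HSIC-style disparity term. Hypothetical helpers.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_knn_graph(x: torch.Tensor, k: int = 10) -> torch.Tensor:
    """New graph topology from node attributes: cosine-similarity kNN graph."""
    xn = F.normalize(x, dim=1)
    sim = xn @ xn.t()                                   # pairwise cosine similarity
    topk = sim.topk(k + 1, dim=1).indices               # k most similar nodes (+ self)
    adj = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    return ((adj + adj.t()) > 0).float()                # symmetrize


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in the standard GCN."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


class SimpleGCN(nn.Module):
    """Two-layer GCN encoder producing hidden node embeddings for one (A, X) view."""
    def __init__(self, in_dim: int, hid_dim: int, out_dim: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, adj_norm: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)


class AttentionFusion(nn.Module):
    """Attentional integration: per-node softmax weights over the view embeddings."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, embeddings: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.stack(embeddings, dim=1)                  # (N, views, dim)
        alpha = torch.softmax(self.score(torch.tanh(stacked)), 1) # (N, views, 1)
        return (alpha * stacked).sum(dim=1)                       # weighted sum -> final

def hsic(h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
    """Linear-kernel HSIC between two view embeddings; minimized as a disparity term."""
    n = h1.size(0)
    c = torch.eye(n) - torch.ones(n, n) / n                       # centering matrix
    return torch.trace(c @ (h1 @ h1.t()) @ c @ (h2 @ h2.t())) / (n - 1) ** 2
```

Under these assumptions, a training step would encode each (A, X) pair with its own `SimpleGCN`, fuse the hidden embeddings with `AttentionFusion`, classify from the fused embeddings, and add the HSIC term (scaled by a small coefficient) to the cross-entropy loss on the labeled nodes so that the views stay non-redundant.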