Graph Neural Network (GNN) is a powerful model for learning representations and making predictions on graph data. Existing efforts on GNN have largely defined the graph convolution as a weighted sum of the features of the connected nodes to form the representation of the target node. Nevertheless, the weighted-sum operation assumes that the neighbor nodes are independent of each other and ignores the possible interactions between them. When such interactions exist, for example when the co-occurrence of two neighbor nodes is a strong signal of the target node's characteristics, existing GNN models may fail to capture the signal. In this work, we argue for the importance of modeling the interactions between neighbor nodes in GNN. We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes. We term this framework the Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes. In particular, we specify two BGNN models named BGCN and BGAT, based on the well-known GCN and GAT, respectively. Empirical results on three public benchmarks of semi-supervised node classification verify the effectiveness of BGNN --- BGCN (BGAT) outperforms GCN (GAT) by 1.6% (1.5%) in classification accuracy.
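To make the bilinear operator concrete, the sketch below shows one possible PyTorch formulation, not the authors' released implementation: the bilinear term for a node is taken as the average of element-wise products of its (linearly transformed) neighbor representations over all neighbor pairs, computed with the identity sum_{i<j} h_i*h_j = ((sum_i h_i)^2 - sum_i h_i^2) / 2, and then mixed with a standard weighted-sum aggregation via a hypothetical coefficient alpha. The class and variable names here are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn as nn


class BilinearAggregator(nn.Module):
    """Sketch of a bilinear neighbor aggregator (assumed formulation).

    For a target node, it averages the element-wise products h_i * h_j of
    transformed neighbor features over all neighbor pairs i < j, using
        sum_{i<j} h_i * h_j = ((sum_i h_i)^2 - sum_i h_i^2) / 2.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # x:   [N, in_dim]  node features
        # adj: [N, N]       binary adjacency (adj[v, i] = 1 if i is a neighbor of v)
        h = self.linear(x)                              # transformed features
        sum_h = adj @ h                                 # sum of neighbor features
        sum_h2 = adj @ (h * h)                          # sum of squared neighbor features
        pair_sum = 0.5 * (sum_h * sum_h - sum_h2)       # sum over neighbor pairs
        deg = adj.sum(dim=1, keepdim=True)
        num_pairs = (deg * (deg - 1) / 2).clamp(min=1)  # number of pairs per node
        return pair_sum / num_pairs                     # average over neighbor pairs


if __name__ == "__main__":
    # Toy usage: combine the bilinear term with a GCN-style mean aggregation,
    # weighted by a mixing coefficient alpha (hypothetical value).
    N, d_in, d_out, alpha = 5, 8, 16, 0.3
    x = torch.randn(N, d_in)
    adj = (torch.rand(N, N) > 0.5).float()
    adj = ((adj + adj.t()) > 0).float()
    adj.fill_diagonal_(0)

    ba = BilinearAggregator(d_in, d_out)
    lin = nn.Linear(d_in, d_out, bias=False)
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    weighted_sum = (adj @ lin(x)) / deg                 # standard weighted-sum aggregation
    out = (1 - alpha) * weighted_sum + alpha * ba(x, adj)
    print(out.shape)                                    # torch.Size([5, 16])
```

Computing the pairwise term through the squared-sum identity keeps the cost linear in the number of edges rather than quadratic in neighborhood size, which is why this form is preferred over explicitly enumerating neighbor pairs.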