Heterogeneous graph neural networks (HGNNs) have a powerful capability to embed the rich structural and semantic information of a heterogeneous graph into node representations. Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) designed for homogeneous graphs, especially the attention mechanism and the multi-layer structure. These mechanisms bring excessive complexity, yet few works study whether they are really effective on heterogeneous graphs. This paper conducts an in-depth and detailed study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN). To capture structural information easily, SeHGNN pre-computes the neighbor aggregation using a lightweight mean aggregator, which reduces complexity by removing overused neighbor attention and avoiding repeated neighbor aggregation in every training epoch. To better utilize semantic information, SeHGNN adopts a single-layer structure with long metapaths to extend the receptive field, as well as a transformer-based semantic fusion module to fuse features from different metapaths. As a result, SeHGNN exhibits a simple network structure, high prediction accuracy, and fast training speed. Extensive experiments on five real-world heterogeneous graphs demonstrate the superiority of SeHGNN over state-of-the-art methods in both accuracy and training speed.
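The two key ideas above — pre-computing metapath-based neighbor aggregation with a mean aggregator, then fusing the per-metapath features with an attention-style module — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the single-head dot-product fusion, and the absence of learned parameters are simplifications for illustration.

```python
import numpy as np

def row_normalize(adj):
    """Row-normalize an adjacency matrix so each row sums to 1 (mean aggregator)."""
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0  # guard isolated nodes against division by zero
    return adj / deg

def precompute(adjs, feats):
    """Pre-compute neighbor aggregation along one metapath (done once, before training).

    adjs  : list of adjacency matrices following the metapath's edge types
    feats : raw features of the metapath's terminal node type
    The product of the row-normalized adjacencies with the features yields the
    mean-aggregated metapath feature, so no neighbor aggregation (and no
    neighbor attention) is repeated during training epochs.
    """
    out = feats
    for adj in reversed(adjs):
        out = row_normalize(adj) @ out
    return out

def semantic_fusion(metapath_feats):
    """Toy parameter-free attention fusion over metapath features (illustrative only).

    metapath_feats : array of shape (num_metapaths, num_nodes, dim)
    Treats each node's metapath features as a token sequence, scores them with
    scaled dot-product self-attention, and averages the attended tokens.
    """
    M, N, d = metapath_feats.shape
    x = metapath_feats.transpose(1, 0, 2)            # (N, M, d): tokens = metapaths
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)   # (N, M, M) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over metapaths
    fused = weights @ x                              # (N, M, d) attended tokens
    return fused.mean(axis=1)                        # (N, d) fused representation
```

Because `precompute` runs only once, each training epoch sees a fixed set of per-metapath feature matrices, and only the (here untrained) fusion step touches them — which is the source of the claimed speedup.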