Recent studies have highlighted the limitations of message-passing graph neural networks (GNNs), such as limited expressive power, over-smoothing, and over-squashing. To alleviate these issues, Graph Transformers (GTs) have been proposed, which extend message passing to a larger coverage, even across the whole graph. Hinging on the global-range attention mechanism, GTs have shown strong capability for representation learning on homogeneous graphs. However, the application of GTs to heterogeneous information networks (HINs) remains under-explored. In particular, owing to their heterogeneity, HINs exhibit distinct data characteristics and thus require different treatment. To bridge this gap, in this paper we investigate representation learning on HINs with Graph Transformers and propose a novel model named HINormer, which capitalizes on a larger-range aggregation mechanism for node representation learning. Assisted by two major modules, namely a local structure encoder and a heterogeneous relation encoder, HINormer captures both the structural and heterogeneous information of nodes on HINs for comprehensive node representations. Extensive experiments on four HIN benchmark datasets demonstrate that our proposed model outperforms the state of the art.