Graph Convolutional Networks (GCNs) have achieved extraordinary success in learning effective task-specific representations of nodes in graphs. However, on Heterogeneous Information Networks (HINs), existing HIN-oriented GCN methods still suffer from two deficiencies: (1) they cannot flexibly explore all possible meta-paths and extract the most useful ones for a target object, which hinders both effectiveness and interpretability; (2) they often need to generate intermediate meta-path-based dense graphs, which leads to high computational complexity. To address these issues, we propose an interpretable and efficient Heterogeneous Graph Convolutional Network (ie-HGCN) to learn the representations of objects in HINs. It is designed as a hierarchical aggregation architecture, i.e., object-level aggregation first, followed by type-level aggregation. This novel architecture can automatically extract useful meta-paths for each object from all possible meta-paths (within a length limit), which brings good model interpretability. It also reduces the computational cost by avoiding intermediate HIN transformation and neighborhood attention. We provide a theoretical analysis of the proposed ie-HGCN in terms of evaluating the usefulness of all possible meta-paths, its connection to spectral graph convolution on HINs, and its quasi-linear time complexity. Extensive experiments on three real network datasets demonstrate the superiority of ie-HGCN over state-of-the-art methods.
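To make the hierarchical aggregation idea concrete, the following is a minimal sketch (not the authors' implementation) of one ie-HGCN-style layer for a single target object type. The class name HierAggLayer and the input conventions (x_self for target-type features, x_nbrs/adjs for per-type neighbor features and row-normalized adjacency blocks) are hypothetical placeholders chosen for illustration: object-level aggregation projects and pools same-type neighbors, and type-level aggregation weights the self term and each neighbor type with a learned attention-style score.

```python
# A minimal sketch, assuming PyTorch and the hypothetical inputs below:
#   x_self:    [N, d_self]   features of the target-type objects
#   x_nbrs[t]: [N_t, d_t]    features of neighbor objects of type t
#   adjs[t]:   [N, N_t]      row-normalized adjacency from target objects
#                            to type-t neighbors
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierAggLayer(nn.Module):
    def __init__(self, d_self, d_nbrs, d_out, d_att=32):
        super().__init__()
        # Object-level aggregation: project each object type into a common space.
        self.w_self = nn.Linear(d_self, d_out, bias=False)
        self.w_nbr = nn.ModuleList(nn.Linear(d, d_out, bias=False) for d in d_nbrs)
        # Type-level aggregation: score each type's contribution per object.
        self.q = nn.Linear(d_out, d_att)
        self.k = nn.Linear(d_out, d_att)
        self.att = nn.Linear(2 * d_att, 1)

    def forward(self, x_self, x_nbrs, adjs):
        # Object-level: pool same-type neighbors with the normalized adjacency block.
        h_self = self.w_self(x_self)                                   # [N, d_out]
        h_types = [adj @ w(x) for w, x, adj in zip(self.w_nbr, x_nbrs, adjs)]
        # Type-level: attention over the self term and each neighbor type.
        cands = torch.stack([h_self] + h_types, dim=1)                 # [N, T+1, d_out]
        q = self.q(h_self).unsqueeze(1).expand(-1, cands.size(1), -1)  # [N, T+1, d_att]
        e = self.att(torch.cat([q, self.k(cands)], dim=-1)).squeeze(-1)
        a = F.softmax(e, dim=1)                                        # per-object type weights
        return F.elu((a.unsqueeze(-1) * cands).sum(dim=1))             # [N, d_out]
```

Stacking such layers composes relations across types, which is how useful meta-paths (within the length limit set by the depth) can be read off from the learned type-level weights; the exact scoring function and normalization in the paper may differ from this sketch.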