In this paper, we investigate how the contextual and structural divergence that widely exists in rich-text graphs may influence representation learning. To this end, we propose Jensen-Shannon Divergence Message-Passing (JSDMP), a new learning paradigm for rich-text graph representation learning. Besides considering structural and textual similarity, JSDMP further captures the corresponding dissimilarity via the Jensen-Shannon divergence. Similarity and dissimilarity are then jointly used to compute new message weights among text nodes, enabling representations to be learned from the contextual and structural information of truly correlated text nodes. With JSDMP, we propose two novel graph neural networks, the Divergent Message-Passing Graph Convolutional Network (DMPGCN) and the Divergent Message-Passing PageRank Graph Neural Network (DMPPRG), for learning representations in rich-text graphs. DMPGCN and DMPPRG have been extensively tested on well-established rich-text datasets and compared with several state-of-the-art baselines. The experimental results show that DMPGCN and DMPPRG outperform the baselines, demonstrating the effectiveness of the proposed Jensen-Shannon Divergence Message-Passing paradigm.
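For intuition only, the sketch below illustrates how a Jensen-Shannon divergence term could be folded into per-edge message weights; it is not the paper's exact formulation. The feature representation (nonnegative text features such as bag-of-words), the combination rule `alpha * sim + (1 - alpha) * (1 - jsd)`, and the row normalization are all illustrative assumptions.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two nonnegative feature vectors,
    treated as discrete distributions after normalization."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def message_weights(text_feats, adj, alpha=0.5):
    """Illustrative edge reweighting: blend cosine similarity with a
    JS-divergence dissimilarity term, then row-normalize.
    text_feats: (n, d) nonnegative text features (assumed, e.g. bag-of-words).
    adj: (n, n) binary adjacency matrix."""
    n = adj.shape[0]
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                sim = text_feats[i] @ text_feats[j] / (
                    np.linalg.norm(text_feats[i]) * np.linalg.norm(text_feats[j]) + 1e-12)
                dis = js_divergence(text_feats[i], text_feats[j])
                # Assumed combination rule: reward similarity, penalize divergence.
                w[i, j] = alpha * sim + (1.0 - alpha) * (1.0 - dis)
    row_sum = w.sum(axis=1, keepdims=True)
    return w / np.where(row_sum == 0, 1.0, row_sum)

def propagate(h, weights):
    """One GCN-style propagation step using the reweighted messages."""
    return weights @ h
```

Row normalization keeps each aggregation a convex combination of neighbor features, so nodes whose text distributions diverge strongly contribute proportionally less to the message.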