Multiview network embedding aims to project the nodes of a network into low-dimensional vectors while preserving their multiple relations and attribute information. Contrastive learning approaches have shown promising performance on this task. However, they neglect the semantic consistency between the fused representation and the individual view representations, and they struggle to model the complementary information across views. To address these deficiencies, this work presents a novel Contrastive leaRning framEwork for Multiview network Embedding (CREME). In our framework, different views are derived from the various relations among nodes. We then generate view embeddings with dedicated view encoders and fuse these representations with an attentive multiview aggregator. In particular, we design two collaborative contrastive objectives, view-fusion InfoMax and inter-view InfoMin, to train the model in a self-supervised manner. The former distills information from the embeddings generated for different views into the fused representation, while the latter captures complementary information across views to keep the view embeddings distinctive. We further show that the two objectives can be unified into a single training objective. Extensive experiments on three real-world datasets demonstrate that CREME consistently outperforms state-of-the-art methods.
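To make the two components described above concrete, the following is a minimal PyTorch sketch of an attentive multiview aggregator together with a combined InfoMax/InfoMin contrastive loss. All names, dimensions, and the InfoNCE-style formulation are illustrative assumptions for exposition, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveAggregator(nn.Module):
    """Fuse V per-view node embeddings with learned attention weights."""

    def __init__(self, dim: int):
        super().__init__()
        # Shared scoring applied to a transformed view embedding.
        self.proj = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (num_views, num_nodes, dim)
        w = self.score(torch.tanh(self.proj(views)))   # (V, N, 1)
        alpha = torch.softmax(w, dim=0)                # attention over views
        return (alpha * views).sum(dim=0)              # fused: (N, dim)


def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             tau: float = 0.5) -> torch.Tensor:
    """Standard InfoNCE; other nodes in the batch serve as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / tau                           # (N, N) similarities
    labels = torch.arange(a.size(0), device=a.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)


def creme_style_loss(views: torch.Tensor, fused: torch.Tensor,
                     beta: float = 1.0, tau: float = 0.5) -> torch.Tensor:
    """View-fusion InfoMax pulls each view toward the fused embedding;
    inter-view InfoMin pushes different views of the same node apart so
    each view retains complementary information. `beta` trades them off;
    the negated InfoNCE term is an assumed InfoMin surrogate."""
    num_views = views.size(0)
    infomax = sum(info_nce(views[v], fused, tau) for v in range(num_views))
    infomin = torch.zeros((), device=views.device)
    for i in range(num_views):
        for j in range(i + 1, num_views):
            infomin = infomin - info_nce(views[i], views[j], tau)
    return infomax + beta * infomin


# Toy usage: 2 views, 8 nodes, 16-dimensional embeddings.
views = torch.randn(2, 8, 16)
aggregator = AttentiveAggregator(16)
fused = aggregator(views)
loss = creme_style_loss(views, fused)
loss.backward()
```

Folding both terms into one returned scalar mirrors the abstract's claim that the two objectives can be unified into a single training objective, with the (hypothetical) weight `beta` controlling the balance between consistency and distinctiveness.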