Graph-based multi-view clustering, which aims to obtain a partition of data across multiple views, has received considerable attention in recent years. Although great efforts have been made in graph-based multi-view clustering, it remains a challenge to fuse characteristics from various views to learn a common representation for clustering. In this paper, we propose a novel Consistent Multiple Graph Embedding Clustering framework (CMGEC). Specifically, a multiple graph auto-encoder (M-GAE) is designed to flexibly encode the complementary information of multi-view data using a multi-graph attention fusion encoder. To guide the learned common representation to maintain the similarity of neighboring characteristics in each view, a Multi-view Mutual Information Maximization module (MMIM) is introduced. Furthermore, a graph fusion network (GFN) is devised to explore the relationships among the graphs from different views and to provide the common consensus graph required by M-GAE. By jointly training these models, we obtain a common latent representation that encodes more complementary information from the multiple views and describes the data more comprehensively. Experiments on three types of multi-view datasets demonstrate that CMGEC outperforms state-of-the-art clustering methods.