Multi-view feature extraction is an efficient approach for alleviating the curse of dimensionality in high-dimensional multi-view data. Contrastive learning (CL), a popular self-supervised learning method, has recently attracted considerable attention. However, most CL-based methods are constructed only at the sample level. In this study, we propose a novel multi-view feature extraction method based on a dual contrastive head, which introduces a structural-level contrastive loss into the sample-level CL-based method. The structural-level CL pushes the potential subspace structures of any two cross views to be consistent, which helps the sample-level CL extract discriminative features more effectively. Furthermore, we prove the relationships between the structural-level CL and mutual information as well as probabilistic intra- and inter-scatter, which provides theoretical support for its excellent performance. Finally, numerical experiments on six real datasets demonstrate the superior performance of the proposed method compared with existing methods.
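To make the dual contrastive head idea concrete, below is a minimal sketch (not the authors' implementation) of combining a standard sample-level InfoNCE term with a hypothetical structural-level term that encourages the two views' latent structure matrices to agree. The function names, the form of the structural loss, and the weighting parameter `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sample_level_infonce(z1, z2, temperature=0.5):
    """Standard sample-level contrastive (NT-Xent style) loss between
    two views' embeddings z1, z2 of shape (n, d)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # cross-view similarities (n, n)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def structural_level_loss(z1, z2, temperature=0.5):
    """Hypothetical structural-level term: build each view's normalized
    affinity (structure) matrix and penalize disagreement between them,
    so the potential subspace structures of the two views stay consistent."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    s1 = F.softmax(z1 @ z1.t() / temperature, dim=1)    # view-1 structure matrix
    s2 = F.softmax(z2 @ z2.t() / temperature, dim=1)    # view-2 structure matrix
    # symmetric KL divergence as a simple cross-view consistency surrogate
    return 0.5 * (F.kl_div(s1.log(), s2, reduction='batchmean')
                  + F.kl_div(s2.log(), s1, reduction='batchmean'))

def dual_contrastive_loss(z1, z2, lam=1.0):
    """Total objective: sample-level CL plus weighted structural-level CL."""
    return sample_level_infonce(z1, z2) + lam * structural_level_loss(z1, z2)
```

In this sketch the structural-level term acts on pairwise structure matrices rather than individual samples, which is the intended division of labor between the two heads; the actual loss used in the paper may differ.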