In this study, we propose a feature extraction framework based on contrastive learning with adaptive positive and negative samples (CL-FEFA) that is suitable for unsupervised, supervised, and semi-supervised single-view feature extraction. CL-FEFA adaptively constructs the positive and negative samples from the results of feature extraction itself, which makes them more appropriate and accurate. Discriminative features are then re-extracted under an InfoNCE loss built on these positive and negative samples, which makes intra-class samples more compact and inter-class samples more dispersed. At the same time, dynamically constructing positive and negative samples from the latent structure of the subspace samples makes our framework more robust to noisy data. Furthermore, CL-FEFA considers the mutual information between positive samples, that is, similar samples in the latent structure, which provides theoretical support for its advantages in feature extraction. Numerical experiments show that the proposed framework has a clear advantage over traditional feature extraction methods and contrastive learning methods.
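To make the InfoNCE objective concrete, the following is a minimal NumPy sketch, not the paper's implementation: for each anchor, the loss contrasts the similarity to its designated positives against all other samples in the batch. The function name `info_nce_loss`, the cosine-similarity choice, and the boolean `pos_mask` encoding (which, in CL-FEFA, would be built adaptively from the extracted features rather than fixed in advance) are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z, pos_mask, temperature=0.5):
    """Illustrative InfoNCE-style loss over a batch of embeddings.

    z        : (n, d) array of extracted features.
    pos_mask : (n, n) boolean array; pos_mask[i, j] is True when sample j
               is treated as a positive for anchor i (diagonal must be False).
    Returns the mean of -log( sum_pos exp(sim) / sum_all exp(sim) ).
    """
    # Cosine similarity: normalize rows, then take scaled inner products.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    np.fill_diagonal(sim, -np.inf)          # exclude self-pairs; exp(-inf) = 0
    exp_sim = np.exp(sim)
    denom = exp_sim.sum(axis=1)             # all candidates for each anchor
    numer = np.where(pos_mask, exp_sim, 0.0).sum(axis=1)  # positives only
    return float(np.mean(np.log(denom) - np.log(numer)))
```

Because the numerator sums over a subset of the denominator's terms, the loss is strictly positive and shrinks as positives become more similar to their anchors than the negatives are, which is exactly the intra-class compactness / inter-class dispersion effect described above.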