Contrastive learning has achieved remarkable success in representation learning via self-supervision in unsupervised settings. However, effectively adapting contrastive learning to supervised learning tasks remains a challenge in practice. In this work, we introduce a dual contrastive learning (DualCL) framework that simultaneously learns the features of input samples and the parameters of classifiers in the same space. Specifically, DualCL regards the classifier parameters as augmented samples associated with different labels and then exploits contrastive learning between the input samples and these augmented samples. Empirical studies on five benchmark text classification datasets and their low-resource versions demonstrate improvements in classification accuracy and confirm DualCL's ability to learn discriminative representations.
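To make the idea concrete, below is a minimal sketch of what a dual contrastive objective of this kind could look like, assuming an InfoNCE-style loss with a temperature `tau`. It treats each sample's per-class classifier rows as the label-associated "augmented samples" described above and contrasts in both directions: sample features against classifier rows, and gold-label classifier rows against sample features. The function name `dual_contrastive_loss` and the tensor layout (`z` for sample features, `theta` for per-sample classifier parameters) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dual_contrastive_loss(z, theta, labels, tau=0.1):
    """Hypothetical sketch of a dual contrastive objective.

    z:      (B, d)    features of the input samples
    theta:  (B, C, d) per-sample classifier parameters, one row per class
    labels: (B,)      gold class indices
    """
    z = F.normalize(z, dim=-1)
    theta = F.normalize(theta, dim=-1)
    B = z.size(0)

    # Direction 1: anchor each sample feature z_i against its classifier
    # rows theta_i^k; the row matching the gold label is the positive.
    logits_z = torch.einsum('bd,bcd->bc', z, theta) / tau
    loss_z = F.cross_entropy(logits_z, labels)

    # Direction 2: anchor the gold-label classifier row theta_i^{y_i}
    # against all sample features; samples sharing the label are positives.
    anchor = theta[torch.arange(B), labels]            # (B, d)
    logits_t = anchor @ z.t() / tau                    # (B, B)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    log_prob = logits_t.log_softmax(dim=-1)
    loss_t = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1)

    return loss_z + loss_t.mean()
```

Because the two feature sets live in the same space, the first term doubles as a standard classification loss (the dot products with the classifier rows are the class logits), while the second term pulls same-label samples toward a shared label-aware anchor, which is one plausible reading of why the learned representations become discriminative.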