Cross-domain sentiment analysis aims to predict the sentiment of texts in a target domain using a model trained on a source domain, coping with the scarcity of labeled data. Previous studies mostly rely on cross-entropy-based methods, which suffer from instability and poor generalization. In this paper, we explore contrastive learning for cross-domain sentiment analysis. We propose a modified contrastive objective with in-batch negative samples so that sentence representations from the same class are pushed close together while those from different classes are pushed further apart in the latent space. Experiments on two widely used datasets show that our model achieves state-of-the-art performance on both cross-domain and multi-domain sentiment analysis tasks. Meanwhile, visualizations demonstrate the effectiveness of transferring knowledge learned in the source domain to the target domain, and an adversarial test verifies the robustness of our model.
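To make the objective concrete, the following is a minimal numpy sketch of a generic supervised contrastive loss with in-batch negatives, in the spirit described above: samples sharing a class label act as positives, and all other in-batch samples act as negatives. This is an illustrative assumption, not the paper's exact formulation; the function name, temperature value, and mean-over-positives reduction are all hypothetical choices.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss with in-batch negatives.

    Same-class pairs are treated as positives; every other sample in the
    batch serves as a negative. (A sketch, not the paper's exact objective.)
    """
    # L2-normalize so dot products become cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise scaled similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)    # exclude each sample's self-pair
    losses = []
    for i in range(n):
        pos = not_self[i] & (labels == labels[i])  # same-class positives
        if not pos.any():
            continue
        # log-softmax of each positive over all other in-batch samples
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        log_probs = sim[i][pos] - log_denom
        losses.append(-log_probs.mean())
    return float(np.mean(losses))
```

Under this loss, batches whose same-class embeddings cluster tightly score lower than batches where the classes are intermixed, which is exactly the geometry the objective is meant to induce.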