Contrastive learning techniques have been widely used in the field of computer vision as a means of learning representations from augmented views of data. In this paper, we extend the use of contrastive learning embeddings to sentiment analysis tasks and demonstrate that fine-tuning on these embeddings outperforms fine-tuning on BERT-based embeddings, achieving higher scores on the DynaSent sentiment analysis benchmark. We also evaluate how our fine-tuned models perform on cross-domain benchmark datasets. Additionally, we explore upsampling techniques that produce a more balanced class distribution and yield further improvements on our benchmark tasks.