Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards a given aspect. Because labelled data are expensive and limited, pretraining has become the de facto standard for ABSA. However, a severe domain shift typically exists between the pretraining and downstream ABSA datasets, which hinders effective knowledge transfer under direct finetuning and leaves downstream performance sub-optimal. To mitigate this domain shift, we introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline, featuring both instance- and knowledge-level alignment. Specifically, we first devise a novel coarse-to-fine retrieval sampling approach that selects target-domain-related instances from the large-scale pretraining dataset, thus aligning the instances between the pretraining and target domains (First Stage). We then introduce a knowledge guidance-based strategy to further bridge the domain gap at the knowledge level. In practice, we instantiate the model pretrained on the sampled instances as a knowledge guidance model and a learner model, respectively. On the target dataset, we design an on-the-fly teacher-student joint fine-tuning approach to progressively transfer knowledge from the knowledge guidance model to the learner model (Second Stage). Thereby, the learner model retains more domain-invariant knowledge while learning new knowledge from the target dataset. In the Third Stage, the learner model is finetuned to better adapt its learned knowledge to the target dataset. Extensive experiments and analyses on several ABSA benchmarks demonstrate the effectiveness and universality of our proposed pretraining framework. Our source code and models are publicly available at https://github.com/WHU-ZQH/UIKA.
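The abstract does not detail how the coarse-to-fine retrieval sampling of the First Stage is realized. As a minimal illustrative sketch only, not the paper's actual method, the coarse pass below filters pretraining instances by vocabulary overlap with the target domain, and the fine pass re-ranks the survivors by cosine similarity to a target-domain bag-of-words centroid. The function names and the bag-of-words representation are assumptions for illustration.

```python
# Hypothetical sketch of coarse-to-fine retrieval sampling; the actual
# retrieval criteria in the paper may differ (e.g. dense embeddings).
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coarse_to_fine_sample(pretrain_corpus, target_corpus, coarse_k=1000, fine_k=10):
    # Coarse stage: keep pretraining instances that share any vocabulary
    # with the target domain.  Fine stage: rank the survivors by cosine
    # similarity to the target-domain centroid and keep the top fine_k.
    target_vocab = set()
    centroid = Counter()
    for doc in target_corpus:
        tokens = doc.lower().split()
        target_vocab.update(tokens)
        centroid.update(tokens)
    coarse = [d for d in pretrain_corpus
              if target_vocab & set(d.lower().split())][:coarse_k]
    ranked = sorted(coarse,
                    key=lambda d: cosine(Counter(d.lower().split()), centroid),
                    reverse=True)
    return ranked[:fine_k]
```

In this toy setting, a target corpus of laptop reviews would retain review-like pretraining sentences and discard unrelated ones (e.g. financial news), which is the instance-level alignment the abstract describes.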
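The on-the-fly teacher-student joint fine-tuning of the Second Stage is not spelled out in the abstract; a common way to realize such knowledge transfer is a distillation-style objective, sketched below under that assumption. The student (learner model) is trained on a weighted sum of cross-entropy on the gold label and a KL term that pulls it towards the teacher (knowledge guidance model); `alpha` and `temperature` are hypothetical hyperparameters, not values from the paper.

```python
# Assumed distillation-style joint fine-tuning loss; the paper's exact
# objective may differ.
import math

def softmax(logits, temperature=1.0):
    # Numerically stable softmax with an optional distillation temperature.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def joint_finetune_loss(student_logits, teacher_logits, label,
                        alpha=0.5, temperature=2.0):
    # Cross-entropy on the gold sentiment label, plus a KL term that
    # pulls the learner towards the knowledge guidance model, so that
    # domain-invariant knowledge is retained while fitting the target data.
    student_probs = softmax(student_logits)
    ce = -math.log(student_probs[label] + 1e-12)
    kd = kl_divergence(softmax(teacher_logits, temperature),
                       softmax(student_logits, temperature))
    return (1 - alpha) * ce + alpha * kd
```

When the student already matches the teacher, the KL term vanishes and only the supervised cross-entropy drives the update; early in fine-tuning, the KL term dominates and anchors the learner to the guidance model's knowledge.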