Learning a sentiment classification model that adapts well to any target domain different from the source domain is a challenging problem. The majority of existing approaches focus on learning a common representation by leveraging both source and target data during training. In this paper, we introduce a two-stage training procedure that leverages weakly supervised datasets to develop simple lift-and-shift-based predictive models without exposure to the target domain during the training phase. Experimental results show that transfer with weak supervision from a source domain to various target domains yields performance very close to that obtained via supervised training on the target domain itself.