Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that aims to extract aspects from sentences and identify their corresponding sentiments. Aspect term extraction (ATE) is a crucial step for ABSA. Because annotating aspect terms is expensive, labeled target-domain data for fine-tuning is often unavailable. To address this problem, many approaches have recently been proposed to transfer common knowledge in an unsupervised way, but such methods involve many modules and require expensive multi-stage preprocessing. In this paper, we propose a simple but effective technique based on mutual information maximization, which can serve as an additional component to enhance any model for cross-domain ABSA and ATE. Furthermore, we provide some analysis of this approach. Experimental results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1 on average over 10 different domain pairs. Beyond that, our method can be extended to other sequence labeling tasks, such as named entity recognition (NER).
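The abstract does not spell out the objective, but mutual information maximization over a model's predictions on unlabeled data is commonly written as I(X; Y) = H(E[p(y|x)]) − E[H(p(y|x))]: maximizing it encourages confident per-example predictions while keeping the batch-level label distribution diverse. The sketch below illustrates that quantity in pure Python; the function name and the 3-class setup are hypothetical, not taken from the paper.

```python
import math

def mutual_information(probs):
    """Estimate I(X; Y) = H(marginal) - mean conditional entropy for a
    batch of class distributions `probs` (each row sums to 1)."""
    n, k = len(probs), len(probs[0])
    # Marginal label distribution: average prediction over the batch.
    marginal = [sum(row[j] for row in probs) / n for j in range(k)]

    def entropy(p):
        return -sum(x * math.log(x) for x in p if x > 0)

    h_marginal = entropy(marginal)                      # diversity term
    h_conditional = sum(entropy(row) for row in probs) / n  # confidence term
    return h_marginal - h_conditional

# Confident, diverse predictions -> MI near its maximum, log(3).
sharp = [[0.98, 0.01, 0.01], [0.01, 0.98, 0.01], [0.01, 0.01, 0.98]]
# Uniform predictions -> MI of zero.
flat = [[1 / 3, 1 / 3, 1 / 3]] * 3
```

In a training loop, the negated quantity would be added to the supervised source-domain loss so that gradient descent maximizes it on unlabeled target-domain batches.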