Aspect-based sentiment analysis (ABSA) aims to associate a piece of text with a set of aspects and, at the same time, infer their respective sentiment polarities. State-of-the-art approaches are built on fine-tuning various pre-trained language models and commonly attempt to learn aspect-specific representations from the corpus. Unfortunately, an aspect is often expressed implicitly through a set of representative words, which makes this implicit mapping unattainable unless sufficient labeled examples are available; however, high-quality labeled examples may not be readily available in real-world scenarios. In this paper, we propose to jointly address the aspect categorization and aspect-based sentiment subtasks in a unified framework. Specifically, we first introduce a simple but effective mechanism that constructs an auxiliary sentence for the implicit aspect based on the semantic information in the corpus. We then encourage BERT to learn the aspect-specific representation in response to this automatically constructed auxiliary sentence rather than to the aspect itself. Finally, we empirically evaluate the proposed solution through a comparative study on real benchmark datasets for both the ABSA and Targeted-ABSA tasks. Extensive experiments show that it consistently achieves state-of-the-art performance on aspect categorization and aspect-based sentiment across all datasets, with considerable improvement margins. The code of BERT-ASC is available on GitHub: https://github.com/amurtadha/BERT-ASC.
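The auxiliary-sentence idea can be sketched as follows. This is a minimal illustration, not the paper's exact method: the seed lexicon and the word-overlap heuristic are assumptions standing in for the corpus-level semantic construction, and the `[CLS]`/`[SEP]` string is a stand-in for proper BERT tokenization.

```python
# Hypothetical sketch of auxiliary-sentence construction for an implicit aspect.
# ASPECT_SEEDS and the overlap heuristic are illustrative assumptions; BERT-ASC
# derives aspect representatives from semantic information in the corpus.

ASPECT_SEEDS = {
    "food":    {"pizza", "sushi", "flavor", "delicious", "menu"},
    "service": {"waiter", "staff", "friendly", "rude", "slow"},
}

def build_auxiliary_sentence(sentence: str, aspect: str) -> str:
    """Collect the words in the sentence that represent the aspect (here via
    simple seed overlap) and join them into an auxiliary sentence."""
    tokens = sentence.lower().replace(",", " ").replace(".", " ").split()
    related = [t for t in tokens if t in ASPECT_SEEDS.get(aspect, set())]
    # Fall back to the aspect name itself when no representative word is found.
    return " ".join(related) if related else aspect

def to_bert_pair(sentence: str, aspect: str) -> str:
    """Format (sentence, auxiliary sentence) as a BERT-style sentence pair, so
    the model conditions on the auxiliary sentence instead of the bare aspect."""
    aux = build_auxiliary_sentence(sentence, aspect)
    return f"[CLS] {sentence} [SEP] {aux} [SEP]"
```

For a review such as "The waiter was rude but the pizza was delicious.", the auxiliary sentence for the `service` aspect would be built from its representative words ("waiter rude"), letting the classifier attend to explicit cues even though the aspect category itself never appears in the text.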