Aspect-based sentiment analysis (ABSA) is a text analysis methodology that determines the polarity of opinions expressed toward specific aspects of given targets. The majority of ABSA research has been conducted on English, with only a small amount of work available for Arabic. Most previous Arabic studies have relied on deep learning models built on context-independent word embeddings (e.g., word2vec), where each word has a fixed representation regardless of its context. This article explores the modeling capabilities of contextual embeddings from pre-trained language models such as BERT, together with a sentence-pair input formulation, on the Arabic aspect sentiment polarity classification task. In particular, we develop a simple but effective BERT-based neural baseline for this task. According to experimental results on three different Arabic datasets, our BERT architecture with a simple linear classification layer surpasses state-of-the-art works, achieving an accuracy of 89.51% on the Arabic hotel reviews dataset, 73% on the human-annotated book reviews dataset, and 85.73% on the Arabic news dataset.
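The following is a minimal sketch (not the authors' released code) of the sentence-pair formulation described above: the review sentence and the aspect term are fed to BERT as a pair, and a single linear layer on top of the [CLS] vector predicts the sentiment polarity. The checkpoint name, label set, and example review are illustrative assumptions, not details taken from the paper.

```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed Arabic BERT checkpoint
NUM_POLARITIES = 3  # positive / negative / neutral (assumed label set)


class BertPairClassifier(nn.Module):
    """BERT encoder with a simple linear classification layer on [CLS]."""

    def __init__(self):
        super().__init__()
        self.bert = AutoModel.from_pretrained(MODEL_NAME)
        self.classifier = nn.Linear(self.bert.config.hidden_size, NUM_POLARITIES)

    def forward(self, input_ids, attention_mask, token_type_ids):
        outputs = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        cls_vector = outputs.last_hidden_state[:, 0]  # [CLS] representation
        return self.classifier(cls_vector)


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = BertPairClassifier()

# Sentence-pair input: "[CLS] review sentence [SEP] aspect [SEP]"
review = "الغرفة نظيفة لكن الخدمة بطيئة"  # "The room is clean but the service is slow"
aspect = "الخدمة"  # "service"
enc = tokenizer(review, aspect, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"])
print(logits.softmax(dim=-1))  # untrained head: scores are not yet meaningful
```

In this setup, fine-tuning the whole model with a cross-entropy loss over the polarity labels is all that is needed, since the aspect is injected through the second segment of the pair rather than through any task-specific architecture.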