Aspect-based sentiment analysis (ABSA) is a text analysis methodology that determines the polarity of opinions expressed toward specific aspects of given targets. The majority of ABSA research targets English, with only a small body of work available for Arabic. Most previous Arabic research has relied on deep learning models built primarily on context-independent word embeddings (e.g. word2vec), where each word has a fixed representation regardless of its context. This article explores the modeling capabilities of contextual embeddings from pre-trained language models such as BERT, together with a sentence-pair input formulation, on Arabic ABSA tasks. In particular, we build a simple but effective BERT-based neural baseline to handle this task. According to the experimental results on the benchmark Arabic hotel reviews dataset, our BERT architecture with a simple linear classification layer surpasses state-of-the-art work.
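The architecture described above can be sketched as follows. This is a hedged illustration, not the paper's actual code: the BERT encoder is stubbed out with a random vector, and the hidden size, class labels, and example review/aspect strings are assumptions. It shows the two ingredients the abstract names: a sentence-pair input (review as segment A, aspect as segment B) and a single linear classification layer over the pooled representation.

```python
import numpy as np

HIDDEN = 768       # BERT-base hidden size (assumption)
NUM_CLASSES = 3    # e.g. positive / negative / neutral (assumption)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(HIDDEN, NUM_CLASSES))  # classifier weights
b = np.zeros(NUM_CLASSES)                               # classifier bias

def classify(cls_embedding):
    """Map a (HIDDEN,) pooled [CLS] vector to class logits
    via a single linear layer, as in the abstract."""
    return cls_embedding @ W + b

# Sentence-pair input: the review text is segment A, the aspect term
# segment B; a BERT tokenizer would build this sequence internally.
review = "الخدمة ممتازة والغرفة نظيفة"   # hypothetical hotel review
aspect = "الخدمة"                         # hypothetical aspect term
pair = f"[CLS] {review} [SEP] {aspect} [SEP]"

cls_vec = rng.normal(size=HIDDEN)  # stand-in for the BERT encoder output
logits = classify(cls_vec)
print(logits.shape)  # (3,)
```

In practice the stand-in vector would be replaced by the pooled output of a pre-trained Arabic BERT model run over the tokenized sentence pair, and the linear layer would be fine-tuned jointly with the encoder.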