Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity towards a particular aspect in a sentence. Recently, this task has been widely addressed with neural attention mechanisms, which compute attention weights to softly select words for generating aspect-specific sentence representations. The attention is expected to concentrate on opinion words for accurate sentiment prediction. However, attention is prone to being distracted by noisy or misleading words, or by opinion words of other aspects. In this paper, we propose an alternative hard-selection approach, which determines the start and end positions of the opinion snippet and selects the words between these two positions for sentiment prediction. Specifically, we learn deep associations between the sentence and the aspect, as well as the long-term dependencies within the sentence, by leveraging the pre-trained BERT model. We then detect the opinion snippet with self-critical reinforcement learning. Experimental results demonstrate the effectiveness of our method and show that our hard-selection approach outperforms soft-selection approaches when handling multi-aspect sentences.
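To make the high-level description concrete, the sketch below illustrates one way the hard-selection idea could be realized: start and end positions are predicted over BERT token representations, the tokens inside the sampled span are pooled for sentiment classification, and a self-critical REINFORCE-style loss uses the greedily selected span as a baseline. This is a minimal illustrative sketch, not the authors' implementation; the HuggingFace `transformers` `BertModel`, the class name `SnippetSelector`, the helper `self_critical_loss`, and the reward definition are all assumptions.

```python
# Illustrative sketch (not the authors' released code) of hard snippet
# selection with a self-critical reinforcement learning objective.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel


class SnippetSelector(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", num_polarities=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        self.start_head = nn.Linear(hidden, 1)     # scores for the snippet start
        self.end_head = nn.Linear(hidden, 1)       # scores for the snippet end
        self.classifier = nn.Linear(hidden, num_polarities)

    def forward(self, input_ids, attention_mask, sample=True):
        # Encode "[CLS] sentence [SEP] aspect [SEP]" so BERT captures
        # sentence-aspect associations and long-term dependencies.
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        start_logits = self.start_head(h).squeeze(-1).masked_fill(attention_mask == 0, -1e9)
        end_logits = self.end_head(h).squeeze(-1).masked_fill(attention_mask == 0, -1e9)
        start_dist = torch.distributions.Categorical(logits=start_logits)
        end_dist = torch.distributions.Categorical(logits=end_logits)
        if sample:                                  # stochastic selection for exploration
            start, end = start_dist.sample(), end_dist.sample()
        else:                                       # greedy selection (baseline / inference)
            start, end = start_logits.argmax(-1), end_logits.argmax(-1)
        log_prob = start_dist.log_prob(start) + end_dist.log_prob(end)
        # Hard selection: average only the token vectors inside [start, end].
        positions = torch.arange(h.size(1), device=h.device).unsqueeze(0)
        span_mask = ((positions >= start.unsqueeze(1)) & (positions <= end.unsqueeze(1))).float()
        span_mask = span_mask / span_mask.sum(-1, keepdim=True).clamp(min=1.0)
        snippet_repr = (h * span_mask.unsqueeze(-1)).sum(1)
        return self.classifier(snippet_repr), log_prob


def self_critical_loss(model, input_ids, attention_mask, labels):
    """Self-critical REINFORCE: the reward of a sampled span, minus the reward
    of the greedily selected span (the baseline), scales the span log-probability."""
    sampled_logits, log_prob = model(input_ids, attention_mask, sample=True)
    with torch.no_grad():
        greedy_logits, _ = model(input_ids, attention_mask, sample=False)
    # Assumed reward: log-likelihood of the gold polarity under each selection.
    r_sample = -F.cross_entropy(sampled_logits, labels, reduction="none")
    r_greedy = -F.cross_entropy(greedy_logits, labels, reduction="none")
    return (-(r_sample - r_greedy).detach() * log_prob).mean()
```

In this sketch the greedy span acts as the self-critical baseline, so spans whose sampled reward exceeds the greedy reward are reinforced; the exact reward used in the paper may differ.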