Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity (SC) of, or extract the opinion span (OE) expressed towards, a given aspect. Previous work on ABSA mostly relies on rather complicated aspect-specific feature induction. Recently, pretrained language models (PLMs), e.g., BERT, have been used as context modeling layers to simplify the feature induction structures and achieve state-of-the-art performance. However, such PLM-based context modeling may not be sufficiently aspect-specific. Therefore, a key question is left under-explored: how can aspect-specific context be better modeled through PLMs? To answer this question, we attempt to enhance aspect-specific context modeling with PLMs in a non-intrusive manner. We propose three aspect-specific input transformations, namely aspect companion, aspect prompt, and aspect marker. Informed by these transformations, non-intrusive aspect-specific PLMs can be achieved that promote the PLM to pay more attention to the aspect-specific context in a sentence. Additionally, we craft an adversarial benchmark for ABSA (advABSA) to assess how aspect-specific modeling impacts model robustness. Extensive experimental results on standard and adversarial benchmarks for SC and OE demonstrate the effectiveness and robustness of the proposed method, yielding new state-of-the-art performance on OE and competitive performance on SC.
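The abstract does not specify the exact form of the three input transformations, so the following is only a minimal illustrative sketch of how such aspect-specific transformations could look for a BERT-style input. The function names, the prompt wording, and the `<asp>`/`</asp>` marker tokens are assumptions for illustration, not the paper's actual templates.

```python
def aspect_companion(sentence: str, aspect: str) -> str:
    # Hypothetical "aspect companion": pair the aspect with the sentence
    # as a second segment, in the style of BERT's sentence-pair input.
    return f"[CLS] {sentence} [SEP] {aspect} [SEP]"


def aspect_prompt(sentence: str, aspect: str) -> str:
    # Hypothetical "aspect prompt": append a natural-language prompt
    # that names the aspect of interest.
    return f"[CLS] {sentence} [SEP] the aspect is {aspect} [SEP]"


def aspect_marker(sentence: str, aspect: str) -> str:
    # Hypothetical "aspect marker": surround the aspect span with
    # special marker tokens inside the sentence itself.
    marked = sentence.replace(aspect, f"<asp> {aspect} </asp>", 1)
    return f"[CLS] {marked} [SEP]"


sentence = "The food was great but the service was slow"
aspect = "service"
print(aspect_companion(sentence, aspect))
print(aspect_prompt(sentence, aspect))
print(aspect_marker(sentence, aspect))
```

Each transformation keeps the original sentence intact and only adds aspect information at the input level, which is what makes the approach non-intrusive: no change to the PLM's architecture is required.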