The majority of work in targeted sentiment analysis has concentrated on finding better methods to improve overall results. In this paper, we show that these models are not robust to linguistic phenomena, specifically negation and speculation. We propose a multi-task learning method that incorporates information from syntactic and semantic auxiliary tasks, including negation and speculation scope detection, to create models that are more robust to these phenomena. Furthermore, we create two challenge datasets to evaluate model performance on negated and speculative samples. We find that multi-task models and transfer learning from a language model can improve performance on these challenge datasets. However, the results indicate that there is still considerable room for improvement in making models more robust to linguistic phenomena such as negation and speculation.
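The multi-task setup named above can be illustrated with a minimal hard-parameter-sharing sketch: a shared encoder feeds both the main sentiment task and an auxiliary negation-scope task, and training combines the two losses. This is only a toy illustration under assumed names; the paper's actual architecture, features, task weights, and loss functions are not specified here, and every function below is hypothetical.

```python
# Toy sketch of hard parameter sharing for multi-task learning.
# Illustrative only: all names and the feature scheme are hypothetical,
# not the paper's actual model.

def shared_encoder(tokens):
    """Shared representation (toy: one feature per token, its length)."""
    return [len(t) for t in tokens]

def sentiment_head(features):
    """Main-task head (toy rule standing in for a learned classifier)."""
    return "positive" if sum(features) / len(features) > 4 else "negative"

def negation_scope_head(features):
    """Auxiliary head (toy stand-in for negation-scope tagging)."""
    return [f <= 3 for f in features]

def multitask_loss(main_loss, aux_loss, aux_weight=0.1):
    """Joint objective: main loss plus a down-weighted auxiliary loss."""
    return main_loss + aux_weight * aux_loss

tokens = "the film was not good".split()
feats = shared_encoder(tokens)
print(sentiment_head(feats))       # main-task prediction
print(negation_scope_head(feats))  # auxiliary-task prediction
print(multitask_loss(0.52, 0.30))  # combined training signal
```

The design point the sketch captures is that both heads read the same shared representation, so gradients from the auxiliary scope task shape the features the sentiment head sees.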