The majority of work in targeted sentiment analysis has concentrated on finding better methods to improve overall results. In this paper, we show that these models are not robust to linguistic phenomena, specifically negation and speculation. We propose a multi-task learning method that incorporates information from syntactic and semantic auxiliary tasks, including negation and speculation scope detection, to create English-language models that are more robust to these phenomena. Further, we create two challenge datasets to evaluate model performance on negated and speculative samples. We find that multi-task models and transfer learning via language modelling can improve performance on these challenge datasets, but the overall performance indicates that there is still much room for improvement. We release both the datasets and the source code at https://github.com/jerbarnes/multitask_negation_for_targeted_sentiment.