The wealth of text data generated by social media has enabled new kinds of analysis of emotions with language models. These models are often trained on small and costly datasets of text annotations produced by readers who guess the emotions expressed by others in social media posts. This limits the quality of emotion identification methods, both because of the small size of the training data and because of noise introduced when the labels are produced. We present LEIA, a model for emotion identification in text that has been trained on a dataset of more than 6 million posts with self-annotated emotion labels for happiness, affection, sadness, anger, and fear. LEIA is based on a word masking method that enhances the learning of emotion words during model pre-training. LEIA achieves macro-F1 values of approximately 73 on three in-domain test datasets, outperforming other supervised and unsupervised methods in a strong benchmark that shows that LEIA generalizes across posts, users, and time periods. We further perform an out-of-domain evaluation on five different datasets of social media and other sources, showing LEIA's robust performance across media, data collection methods, and annotation schemes. Our results show that LEIA generalizes its classification of anger, happiness, and sadness beyond the domain it was trained on. LEIA can be applied in future research to provide better identification of emotions in text from the perspective of the writer. The models produced for this article are publicly available at https://huggingface.co/LEIA.
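As a rough illustration of the word masking idea mentioned above, the sketch below biases masked-language-model pre-training toward emotion words so the model sees them masked more often than other tokens. This is a minimal sketch, not the authors' implementation: the base tokenizer (roberta-base), the toy emotion lexicon, and the masking probabilities are all assumptions made here for illustration.

```python
# Illustrative sketch (not the LEIA code): masking emotion words more often
# than other words when building masked-language-model training examples.
# The lexicon, base tokenizer, and probabilities below are assumptions.
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed base model

# Hypothetical emotion lexicon; LEIA's actual word list is not shown in the abstract.
EMOTION_WORDS = {"happy", "love", "sad", "angry", "afraid", "joy", "fear"}

def mask_emotion_words(text, p_emotion=0.5, p_other=0.15):
    """Return (input_ids, labels) where emotion words are masked more often."""
    input_ids, labels = [tokenizer.cls_token_id], [-100]
    for word in text.split():
        ids = tokenizer.encode(word, add_special_tokens=False)
        p = p_emotion if word.lower().strip(".,!?") in EMOTION_WORDS else p_other
        if random.random() < p:
            input_ids += [tokenizer.mask_token_id] * len(ids)
            labels += ids                       # predict the original tokens
        else:
            input_ids += ids
            labels += [-100] * len(ids)         # ignored by the MLM loss
    input_ids.append(tokenizer.sep_token_id)
    labels.append(-100)
    return input_ids, labels

ids, labs = mask_emotion_words("I am so happy about this news")
print(tokenizer.decode(ids))
```

Under this sketch, the masked positions corresponding to emotion words dominate the MLM loss, which is one plausible way to realize the "enhanced learning of emotion words" described in the abstract.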