Emotion detection is of great importance for understanding humans. Constructing annotated datasets to train automated models can be expensive. We explore the efficacy of cross-lingual approaches that use data from a source language to build models for emotion detection in a target language. We compare three approaches, namely: i) using inherently multilingual models; ii) translating training data into the target language; and iii) using an automatically tagged parallel corpus. In our study, we consider English as the source language, with Arabic and Spanish as target languages. We study the effectiveness of different classification models, such as BERT and SVMs trained on different features. Our BERT-based monolingual models, trained on target-language data, surpass the state of the art (SOTA) by 4% and 5% absolute Jaccard score for Arabic and Spanish, respectively. Next, we show that, using cross-lingual approaches with English data alone, we can achieve more than 90% and 80% of the relative effectiveness of the Arabic and Spanish BERT models, respectively. Lastly, we use LIME to interpret the differences between the models.
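For concreteness, below is a minimal sketch of the multi-label Jaccard score in which the results above are reported, along with the relative-effectiveness ratio (cross-lingual score divided by monolingual score). It assumes the standard per-example Jaccard accuracy used in SemEval-2018 Task 1-style multi-label emotion classification; the label sets, helper name, and numbers are illustrative, not taken from the paper.

```python
def jaccard_score_multilabel(y_true, y_pred):
    """Mean over examples of |gold ∩ predicted| / |gold ∪ predicted|
    emotion labels (per-example multi-label Jaccard accuracy)."""
    scores = []
    for gold, pred in zip(y_true, y_pred):
        union = gold | pred
        # Convention: if both gold and predicted label sets are empty,
        # count the example as a perfect match.
        scores.append(1.0 if not union else len(gold & pred) / len(union))
    return sum(scores) / len(scores)

# Hypothetical gold labels and predictions for two tweets.
gold = [{"joy", "love"}, {"anger"}]
mono_preds = [{"joy", "love"}, {"anger"}]          # monolingual model
cross_preds = [{"joy"}, {"anger", "fear"}]         # cross-lingual model

mono = jaccard_score_multilabel(gold, mono_preds)    # 1.00
cross = jaccard_score_multilabel(gold, cross_preds)  # 0.50
print(f"relative effectiveness: {cross / mono:.0%}") # 50%
```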