Emotion detection can provide a window into understanding human behavior. However, due to the complex dynamics of human emotions, constructing annotated datasets to train automated models can be expensive. We therefore explore the efficacy of cross-lingual approaches that use data from a source language to build emotion-detection models for a target language. We compare three approaches, namely: i) using inherently multilingual models; ii) translating the training data into the target language; and iii) using an automatically tagged parallel corpus. In our study, we take English as the source language, with Arabic and Spanish as target languages. We examine the effectiveness of different classification models, such as BERT and SVMs trained with different feature sets. Our BERT-based monolingual models, trained on target-language data, surpass the state of the art (SOTA) by 4% and 5% absolute Jaccard score for Arabic and Spanish, respectively. Next, we show that, using English data alone, cross-lingual approaches can achieve more than 90% and 80% of the effectiveness of the monolingual Arabic and Spanish BERT models, respectively. Lastly, we use LIME to analyze the challenges of training cross-lingual models for different language pairs.
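Both the SOTA comparison and the cross-lingual results above are reported in Jaccard score. As a minimal sketch, assuming the sample-averaged multi-label variant commonly used for emotion classification (the averaging convention is an assumption, not stated in the abstract), the metric can be computed as follows:

```python
import numpy as np

def jaccard_score_multilabel(y_true, y_pred):
    """Sample-averaged multi-label Jaccard score.

    y_true, y_pred: binary arrays of shape (n_samples, n_labels),
    one row per text, one column per emotion label.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    intersection = (y_true & y_pred).sum(axis=1)
    union = (y_true | y_pred).sum(axis=1)
    # Convention: a sample with no gold and no predicted labels scores 1.
    per_sample = np.where(union == 0, 1.0, intersection / np.maximum(union, 1))
    return per_sample.mean()

# Toy example with three emotion labels:
gold = [[1, 0, 1], [0, 1, 0]]
pred = [[1, 0, 0], [0, 1, 0]]
print(jaccard_score_multilabel(gold, pred))  # (1/2 + 1/1) / 2 = 0.75
```

Under this reading, 90% relative effectiveness would mean the cross-lingual model's Jaccard score is at least 0.9 times that of the corresponding monolingual BERT model; this interpretation is inferred from the abstract's wording rather than stated in it.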
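The abstract closes with a LIME-based analysis of cross-lingual errors. A minimal sketch of how LIME's text explainer is typically wired to a fitted classifier is shown below; the emotion label set and the `predict_proba` stub are hypothetical placeholders, not the paper's models, and the stub exists only so the snippet runs end to end.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

EMOTIONS = ["anger", "fear", "joy", "sadness"]  # hypothetical label set

def predict_proba(texts):
    # Stand-in for the trained model so the sketch is runnable; replace
    # with real per-emotion scores of shape (len(texts), len(EMOTIONS)).
    rng = np.random.default_rng(0)
    raw = rng.random((len(texts), len(EMOTIONS)))
    return raw / raw.sum(axis=1, keepdims=True)

explainer = LimeTextExplainer(class_names=EMOTIONS)
exp = explainer.explain_instance(
    "No puedo creer que esto haya pasado",  # a target-language input
    predict_proba,
    labels=[0],       # explain the first emotion's output
    num_features=6,   # top contributing tokens
    num_samples=500,  # perturbations used to fit the local model
)
print(exp.as_list(label=0))  # (token, weight) pairs driving the prediction
```

Inspecting which tokens carry the weight for each label is one way such an analysis can surface where translation or multilingual embeddings lose emotion-bearing cues across language pairs.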