Multilingual transformer language models have recently attracted much attention from researchers and are used for cross-lingual transfer learning in many NLP tasks such as text classification and named entity recognition. However, similar methods for transfer learning from monolingual text to code-switched text have not been extensively explored, mainly due to the following challenges: (1) a code-switched corpus, unlike a monolingual corpus, contains more than one language, so existing methods cannot be applied directly; (2) a code-switched corpus usually mixes a resource-rich and a low-resource language, and when multilingual pre-trained language models are used, the final model may be biased towards the resource-rich language. In this paper, we focus on code-switched sentiment analysis, where we have a labelled dataset in the resource-rich language and unlabelled code-switched data. We propose a framework that takes the distinction between the resource-rich and the low-resource language into account. Instead of training on the entire code-switched corpus at once, we create buckets based on the fraction of words in the resource-rich language and progressively train from resource-rich-language-dominated samples to low-resource-language-dominated samples. Extensive experiments across multiple language pairs demonstrate that progressive training helps on low-resource-language-dominated samples.
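To make the bucketing and progressive-training idea concrete, here is a minimal Python sketch under stated assumptions: the word-level language identifier `is_resource_rich_word`, the bucket boundaries, the whitespace tokenization, and the `train_epoch` callback are all hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
# Sketch of bucketing a code-switched corpus by the fraction of
# resource-rich-language words, then training bucket by bucket.
# All names and boundaries below are illustrative assumptions.

def resource_rich_fraction(sentence, is_resource_rich_word):
    """Fraction of tokens that belong to the resource-rich language."""
    tokens = sentence.split()  # assumed whitespace tokenization
    if not tokens:
        return 0.0
    return sum(is_resource_rich_word(t) for t in tokens) / len(tokens)

def make_buckets(corpus, is_resource_rich_word, boundaries=(0.75, 0.5, 0.25, 0.0)):
    """Group samples into buckets ordered from resource-rich-dominated
    to low-resource-dominated; `boundaries` are assumed lower bounds."""
    buckets = [[] for _ in boundaries]
    for sentence in corpus:
        frac = resource_rich_fraction(sentence, is_resource_rich_word)
        for i, lower_bound in enumerate(boundaries):
            if frac >= lower_bound:
                buckets[i].append(sentence)
                break
    return buckets

def progressive_train(model, buckets, train_epoch):
    """Train on buckets in order, starting from resource-rich-dominated
    samples. `train_epoch(model, data)` is an assumed callback that runs
    one fine-tuning pass and returns the updated model."""
    for bucket in buckets:
        model = train_epoch(model, bucket)
    return model
```

Whether later stages reuse earlier buckets (a cumulative curriculum) or train on each bucket alone is a design choice the abstract leaves open; the sketch shows the simpler non-cumulative variant.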