As the fourth-largest language family in the world, the Dravidian languages have become a research hotspot in natural language processing (NLP). Although the family comprises many languages, publicly available resources remain relatively scarce. Moreover, for text classification, a fundamental NLP task, how to handle the multiple Dravidian languages jointly remains a major challenge in Dravidian NLP. To address these problems, we propose a multilingual text classification framework for the Dravidian languages. On the one hand, the framework uses the pre-trained LaBSE model as its base model. To address the problem of text information bias in multi-task learning, we propose an MLM-based strategy to select language-specific words and apply adversarial training to perturb them. On the other hand, since the model cannot fully recognize and exploit the correlations among languages, we further propose a language-specific representation module to enrich the semantic information available to the model. Experimental results demonstrate that the proposed framework achieves significant performance on multilingual text classification tasks, with each strategy yielding measurable improvements.
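The two-step idea of the adversarial component can be sketched as follows. This is a minimal illustration, not the paper's implementation: the selection rule (top-k tokens by MLM loss), the FGM-style normalized-gradient perturbation, and all function names, `k`, and `eps` values are assumptions introduced here for clarity.

```python
import numpy as np

def select_language_specific(tokens, mlm_losses, k=2):
    """Pick the k tokens with the highest MLM loss, used here as a
    proxy for language-specific words that a multilingual model
    reconstructs poorly. (Hypothetical selection criterion.)"""
    idx = np.argsort(np.asarray(mlm_losses))[-k:]
    return sorted(idx.tolist())

def fgm_perturb(embeddings, grads, eps=0.1, targets=()):
    """FGM-style adversarial perturbation (an assumed variant of the
    adversarial training in the abstract): shift only the selected
    token embeddings along their normalized gradient direction."""
    perturbed = embeddings.copy()
    for i in targets:
        norm = np.linalg.norm(grads[i])
        if norm > 0:
            perturbed[i] = embeddings[i] + eps * grads[i] / norm
    return perturbed

# Toy code-mixed sentence: the last two tokens get the highest
# (made-up) MLM losses, so they are treated as language-specific.
tokens = ["this", "movie", "super", "aanu"]
losses = [0.1, 0.2, 1.5, 2.0]
targets = select_language_specific(tokens, losses, k=2)
embeddings = np.zeros((4, 3))
grads = np.ones((4, 3))
adv = fgm_perturb(embeddings, grads, eps=0.3, targets=targets)
```

In a real training loop, `grads` would be the gradient of the classification loss with respect to the embedding layer, and the perturbed embeddings would feed a second forward pass whose loss is added to the original one.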