A sufficient amount of annotated data is required to fine-tune pre-trained language models for downstream tasks. Unfortunately, obtaining labeled data can be costly, especially across multiple language varieties and dialects. We propose to self-train pre-trained language models in zero- and few-shot scenarios to improve performance on data-scarce dialects using only resources from data-rich ones. We demonstrate the utility of our approach in the context of Arabic sequence labeling by using a language model fine-tuned on Modern Standard Arabic (MSA) only to predict named entities (NE) and part-of-speech (POS) tags on several dialectal Arabic (DA) varieties. We show that self-training is indeed powerful, improving zero-shot MSA-to-DA transfer by as much as \texttildelow 10\% F$_1$ (NER) and 2\% accuracy (POS tagging). We achieve even better performance in few-shot scenarios with limited labeled data. We conduct an ablation study showing that the observed performance boost results directly from the unlabeled DA examples used for self-training, which opens up opportunities for developing DA models that exploit only MSA resources. Our approach can also be extended to other languages and tasks.
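To make the self-training procedure concrete, the sketch below shows one minimal instantiation of iterative pseudo-labeling in Python: a tagger fine-tuned on labeled MSA data labels unlabeled DA sentences, confident predictions are added to the training set, and the model is retrained. The \texttt{train\_fn} helper, the confidence threshold, and the iteration count are illustrative assumptions rather than details taken from the paper.

\begin{verbatim}
from typing import Callable, List, Tuple

Sentence = List[str]               # tokenized sentence
Tags = List[str]                   # one NE/POS tag per token
Example = Tuple[Sentence, Tags]
# A tagger maps a sentence to (predicted tags, confidence score).
Tagger = Callable[[Sentence], Tuple[Tags, float]]
# train_fn fine-tunes the pre-trained LM on a labeled set
# (hypothetical helper supplied by the caller).
TrainFn = Callable[[List[Example]], Tagger]

def self_train(train_fn: TrainFn,
               labeled_msa: List[Example],
               unlabeled_da: List[Sentence],
               threshold: float = 0.95,    # illustrative confidence cut-off
               iterations: int = 5) -> Tagger:
    """Iterative pseudo-labeling: fine-tune on MSA, tag DA sentences,
    keep confident predictions, and retrain on the augmented set."""
    train_set = list(labeled_msa)
    pool = list(unlabeled_da)
    tagger = train_fn(train_set)            # zero-shot MSA-only model
    for _ in range(iterations):
        confident, remaining = [], []
        for sent in pool:
            tags, conf = tagger(sent)       # conf: e.g. mean token probability
            if conf >= threshold:
                confident.append((sent, tags))
            else:
                remaining.append(sent)
        if not confident:
            break                           # no new pseudo-labels to add
        train_set.extend(confident)         # add pseudo-labeled DA examples
        pool = remaining
        tagger = train_fn(train_set)        # retrain on MSA + pseudo-labeled DA
    return tagger
\end{verbatim}

In a few-shot setting, the same loop applies with the small labeled DA set simply included alongside \texttt{labeled\_msa}; the confidence measure and stopping criterion are hyperparameters of this sketch, not prescriptions from the paper.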