Incorporating information from other languages can improve results on tasks in low-resource languages. A powerful way to build functional natural language processing systems for low-resource languages is to combine multilingual pre-trained representations with cross-lingual transfer learning. In general, however, shared representations are learned separately, either across tasks or across languages. This paper proposes a meta-learning approach for natural language inference in Persian. The meta-learner alternately draws on information from auxiliary tasks (such as question answering in Persian) and from other languages (such as natural language inference in English). We also investigate the role of a task augmentation strategy in forming additional high-quality tasks. We evaluate the proposed method using four languages and an auxiliary task. The proposed model consistently outperforms the baseline approach, improving accuracy by roughly six percent. We further examine the effect of finding appropriate initial parameters using zero-shot evaluation and CCA similarity.
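As an illustrative sketch only: the snippet below shows one way episodic meta-learning over auxiliary (language, task) episodes might be organized, using a Reptile-style outer update. The toy model, the episode sampler, the episode sources, and all hyperparameters are assumptions for illustration and do not reflect the paper's actual encoder, datasets, or algorithmic details.

```python
# Minimal sketch of episodic meta-learning over auxiliary (language, task)
# episodes, in the spirit of the approach described above.
# All names and hyperparameters here are illustrative assumptions.
import random
import copy
import torch
import torch.nn as nn

# Toy classifier standing in for a multilingual NLI/QA head (assumption).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))

def sample_episode(source):
    """Return a small batch of (features, labels) for an auxiliary episode.
    Real episodes would come from, e.g., English NLI or Persian QA data."""
    x = torch.randn(16, 32)
    y = torch.randint(0, 3, (16,))
    return x, y

episodes = ["nli_en", "qa_fa", "nli_ar", "nli_hi"]  # hypothetical sources
loss_fn = nn.CrossEntropyLoss()
meta_lr, inner_lr, inner_steps = 0.1, 1e-2, 5

for meta_step in range(100):
    source = random.choice(episodes)       # alternate over tasks/languages
    fast_model = copy.deepcopy(model)      # episode-specific copy
    inner_opt = torch.optim.SGD(fast_model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):           # inner-loop adaptation
        x, y = sample_episode(source)
        inner_opt.zero_grad()
        loss_fn(fast_model(x), y).backward()
        inner_opt.step()
    # Reptile-style outer update: move the shared initialization toward
    # the parameters adapted on this episode.
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast_model.parameters()):
            p.add_(meta_lr * (q - p))
```

The resulting shared initialization can then be fine-tuned on the target Persian NLI data; the zero-shot evaluation and CCA similarity analyses mentioned above would assess how suitable that initialization is before fine-tuning.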