Multilingual models such as M-BERT and XLM-R have gained increasing popularity due to their zero-shot cross-lingual transfer capabilities. However, their generalization ability remains inconsistent across typologically diverse languages and different benchmarks. Recently, meta-learning has garnered attention as a promising technique for enhancing transfer learning in low-resource scenarios, particularly for cross-lingual transfer in Natural Language Understanding (NLU). In this work, we propose X-METRA-ADA, a cross-lingual MEta-TRAnsfer learning ADAptation approach for NLU. Our approach adapts MAML, an optimization-based meta-learning approach, to learn to adapt to new languages. We extensively evaluate our framework on two challenging cross-lingual NLU tasks: multilingual task-oriented dialog and typologically diverse question answering. We show that our approach outperforms naive fine-tuning, reaching competitive performance on both tasks for most languages. Our analysis reveals that X-METRA-ADA can leverage limited data for faster adaptation.
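As a rough illustration of the optimization-based meta-learning that the abstract refers to, the sketch below shows a first-order MAML-style episode loop: an inner loop that adapts a copy of the model on a task's support set, and an outer loop that updates the shared initialization from query-set losses. The names (`model`, `tasks`, `inner_lr`, `meta_lr`) and the assumption that the model returns an output with a `.loss` attribute are illustrative choices, not the authors' released implementation; X-METRA-ADA's full procedure (including its adaptation stage) is described in the paper itself.

```python
# Minimal sketch of a first-order MAML-style meta-training step (illustrative only).
import copy
import torch

def meta_train_step(model, tasks, inner_lr=1e-3, meta_lr=1e-5, inner_steps=5):
    """One meta-update over a batch of tasks (e.g., per-language episodes).

    Assumption: `model(**batch)` returns an object with a `.loss` attribute,
    as in HuggingFace-style models; each task is a (support_batch, query_batch) pair.
    """
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    meta_opt.zero_grad()
    for support_batch, query_batch in tasks:
        # Inner loop: adapt a copy ("fast weights") on the support set.
        learner = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            learner(**support_batch).loss.backward()
            inner_opt.step()
        # Outer loop: evaluate the adapted copy on the query set and
        # accumulate first-order meta-gradients into the original model.
        query_loss = learner(**query_batch).loss
        grads = torch.autograd.grad(query_loss, learner.parameters())
        for p, g in zip(model.parameters(), grads):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()  # update the shared initialization
```

The key design choice this illustrates is that the outer update optimizes the initialization for fast adaptation on held-out (query) data, rather than for direct performance on the support data, which is what distinguishes meta-learning from naive fine-tuning.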