We consider two problems of NMT domain adaptation using meta-learning. First, we want to achieve domain robustness, i.e., high translation quality both on domains seen in the training data and on unseen domains. Second, we want our systems to be adaptive, i.e., it should be possible to finetune them with just hundreds of in-domain parallel sentences. We study the domain adaptability of meta-learning while improving the domain robustness of the model. In this paper, we propose a novel approach, RMLNMT (Robust Meta-Learning Framework for Neural Machine Translation Domain Adaptation), which improves the robustness of existing meta-learning models. More specifically, we show how to use a domain classifier in curriculum learning, and we integrate the word-level domain mixing model into the meta-learning framework with a balanced sampling strategy. Experiments on English$\rightarrow$German and English$\rightarrow$Chinese translation show that RMLNMT improves both domain robustness and domain adaptability on seen and unseen domains. Our source code is available at https://github.com/lavine-lmu/RMLNMT.
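To make the meta-learning component more concrete, the sketch below shows one first-order MAML-style meta-update with a balanced domain sampling strategy, so that no single domain dominates a meta-batch. This is a minimal illustration under our own simplifying assumptions, not the RMLNMT implementation: the names `domains`, `balanced_sample`, `meta_step`, and all hyperparameters are hypothetical placeholders, and `model` stands for a generic PyTorch translation model.

```python
# Minimal first-order MAML-style meta-update with balanced domain sampling.
# Illustrative sketch only; not the authors' implementation.
import copy
import random
import torch

def balanced_sample(domains, k):
    """Draw k (src, tgt) pairs from each domain so every domain
    contributes equally to the meta-batch (balanced sampling)."""
    return {name: random.sample(data, k) for name, data in domains.items()}

def meta_step(model, domains, loss_fn, inner_lr=1e-3, meta_lr=1e-4, k=4):
    """One meta-update: adapt per-domain fast weights on a support set,
    accumulate query-set gradients, then update the shared model."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    tasks = balanced_sample(domains, 2 * k)  # k support + k query per domain
    for name, batch in tasks.items():
        support, query = batch[:k], batch[k:]
        learner = copy.deepcopy(model)  # task-specific fast weights
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        # Inner loop: adapt to this domain on its support set.
        for src, tgt in support:
            inner_opt.zero_grad()
            loss_fn(learner(src), tgt).backward()
            inner_opt.step()
        # Outer loop: evaluate the adapted weights on the query set and
        # accumulate a first-order approximation of the meta-gradient.
        for src, tgt in query:
            loss = loss_fn(learner(src), tgt)
            grads = torch.autograd.grad(loss, learner.parameters())
            for mg, g in zip(meta_grads, grads):
                mg += g / (len(tasks) * k)
    # Apply the averaged meta-gradient to the shared initialization.
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= meta_lr * mg
```

The first-order approximation (as in FOMAML) avoids differentiating through the inner-loop updates, which keeps the sketch cheap; the balanced per-domain sampling is what prevents high-resource domains from dominating the learned initialization.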