We propose a two-stage approach for training a single NMT model to translate unseen languages both to and from English. In the first stage, we initialize an encoder-decoder model with pretrained XLM-R and RoBERTa weights, then perform multilingual fine-tuning on parallel data from 40 languages into English. We find this model can generalize to zero-shot translation of unseen languages. In the second stage, we leverage this generalization ability to generate synthetic parallel data from monolingual datasets, then train bidirectionally with successive rounds of back-translation. Our approach, which we call EcXTra (English-centric Crosslingual (X) Transfer), is conceptually simple, using only a standard cross-entropy objective throughout. It is also data-driven, sequentially leveraging auxiliary parallel data and monolingual data. We evaluate unsupervised NMT results for 7 low-resource languages, and find that each round of back-translation training further refines bidirectional performance. Our final single EcXTra-trained model achieves competitive translation performance in all translation directions, notably establishing a new state-of-the-art for English-to-Kazakh (22.9 > 10.4 BLEU). Our code is available at https://github.com/manestay/EcXTra .
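As a minimal sketch (not the authors' released code, which lives in the linked repository), the stage-1 warm-start of an encoder-decoder model from XLM-R and RoBERTa weights might look as follows with Hugging Face transformers; the checkpoint names and tokenizer setup here are illustrative assumptions.

```python
# Sketch: initialize a seq2seq model with a multilingual XLM-R encoder
# and an English RoBERTa decoder, then fine-tune with cross-entropy on
# (source-language, English) parallel pairs as described in the abstract.
from transformers import EncoderDecoderModel, AutoTokenizer

# Checkpoint names are illustrative; the paper's exact choices may differ.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-large",  # multilingual encoder (XLM-R)
    "roberta-large",      # English decoder (RoBERTa); cross-attention is added automatically
)

src_tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
tgt_tokenizer = AutoTokenizer.from_pretrained("roberta-large")

# Required for generation with this encoder-decoder wrapper.
model.config.decoder_start_token_id = tgt_tokenizer.cls_token_id
model.config.pad_token_id = tgt_tokenizer.pad_token_id
```

Stage 2 would then translate monolingual text with this model to build synthetic parallel data and continue training bidirectionally over successive back-translation rounds.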