Multilingual pre-trained models have achieved remarkable performance on cross-lingual transfer learning. However, some multilingual models such as mBERT have been pre-trained on unlabeled corpora, so the embeddings of different languages in these models may not be well aligned. In this paper, we aim to improve zero-shot cross-lingual transfer performance by proposing a pre-training task named Word-Exchange Aligning Model (WEAM), which uses statistical alignment information as prior knowledge to guide cross-lingual word prediction. We evaluate our model on the multilingual machine reading comprehension task MLQA and the natural language inference task XNLI. The results show that WEAM significantly improves zero-shot performance.
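The core idea of using statistical alignment as prior knowledge can be sketched as a word-exchange step: some source-language tokens are swapped for their aligned target-language translations, and the model is then asked to predict the exchanged words. The following is a minimal illustrative sketch, not the paper's implementation; the `toy_align` dictionary and the `word_exchange` helper are hypothetical stand-ins for a real alignment table (e.g. one induced by a statistical aligner).

```python
import random

def word_exchange(tokens, align_dict, exchange_prob=0.15, seed=0):
    """Replace some source tokens with their statistically aligned
    target-language translations, producing code-switched input.

    align_dict maps a source word to its most probable aligned target
    word; here it is a toy table for illustration only.
    Returns the mixed token sequence and a list of (position, original
    word) pairs that a cross-lingual prediction head would be trained
    to recover.
    """
    rng = random.Random(seed)
    mixed, targets = [], []
    for i, tok in enumerate(tokens):
        if tok in align_dict and rng.random() < exchange_prob:
            mixed.append(align_dict[tok])  # exchanged cross-lingual token
            targets.append((i, tok))       # original word to be predicted
        else:
            mixed.append(tok)
    return mixed, targets

# Toy English->German alignment table (hypothetical entries).
toy_align = {"house": "Haus", "cat": "Katze", "small": "klein"}
mixed, targets = word_exchange("the small cat sat".split(), toy_align,
                               exchange_prob=1.0)
# mixed   -> ["the", "klein", "Katze", "sat"]
# targets -> [(1, "small"), (2, "cat")]
```

Training on such code-switched input encourages the model to place aligned words from different languages close together in embedding space, which is the alignment effect the abstract describes.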