Multilingual pretrained language models have demonstrated remarkable zero-shot cross-lingual transfer capabilities. Such transfer emerges by fine-tuning on a task of interest in one language and evaluating on a distinct language not seen during fine-tuning. Despite promising results, we still lack a proper understanding of the source of this transfer. Using a novel layer ablation technique and analyses of the model's internal representations, we show that multilingual BERT, a popular multilingual language model, can be viewed as the stacking of two sub-networks: a multilingual encoder followed by a task-specific, language-agnostic predictor. While the encoder is crucial for cross-lingual transfer and remains mostly unchanged during fine-tuning, the task predictor has little influence on transfer and can be reinitialized during fine-tuning. We present extensive experiments with three distinct tasks, seventeen typologically diverse languages, and multiple domains to support our hypothesis.
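To make the encoder/predictor decomposition concrete, the sketch below (ours, not the authors' released code) re-initializes the upper layers of multilingual BERT before fine-tuning, treating them as the task-specific predictor while keeping the lower, cross-lingual encoder layers pretrained. The layer split, label count, and use of the HuggingFace transformers API are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a HuggingFace mBERT checkpoint and an arbitrary
# 8/4 split between "multilingual encoder" and "task predictor" layers.
from transformers import BertForTokenClassification

NUM_LAYERS = 12      # mBERT-base has 12 transformer layers
ENCODER_LAYERS = 8   # assumed boundary between encoder and task predictor

model = BertForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=9
)

# Re-initialize the upper ("task predictor") layers with the model's own
# weight-init scheme; the lower ("multilingual encoder") layers keep their
# pretrained weights.
for layer in model.bert.encoder.layer[ENCODER_LAYERS:NUM_LAYERS]:
    layer.apply(model._init_weights)

# Fine-tuning on the source-language task then proceeds as usual, followed by
# zero-shot evaluation on target languages not seen during fine-tuning.
```

A layer ablation in this spirit would compare transfer performance when different layers are reinitialized (or restored to their pretrained weights), which is how the encoder layers can be identified as the ones crucial for cross-lingual transfer.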