Multilingual BERT (mBERT), a language model pre-trained on large multilingual corpora, has impressive zero-shot cross-lingual transfer capabilities and performs surprisingly well on zero-shot POS tagging and Named Entity Recognition (NER), as well as on cross-lingual model transfer. At present, mainstream approaches to cross-lingual downstream tasks use the output of mBERT's last transformer layer as the representation of linguistic information. In this work, we explore how the lower layers of mBERT complement its last transformer layer. We propose a feature aggregation module based on an attention mechanism to fuse the information contained in different layers of mBERT. Experiments are conducted on four zero-shot cross-lingual transfer datasets, and the proposed method achieves performance improvements on the key multilingual benchmark tasks XNLI (+1.5%), PAWS-X (+2.4%), NER (+1.2 F1), and POS (+1.5 F1). Through analysis of the experimental results, we show that the layers below mBERT's last layer provide additional useful information for cross-lingual downstream tasks, and we empirically explore the interpretability of mBERT.
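The abstract describes fusing the outputs of different mBERT layers with an attention mechanism rather than using only the last layer. The following is a minimal sketch of that general idea, not the authors' exact module: the class name LayerAttentionPooling, its mean-pooled scoring scheme, and all hyperparameters are illustrative assumptions; only mBERT itself ("bert-base-multilingual-cased") comes from the abstract.

```python
# Sketch: attention-weighted fusion of all mBERT transformer layers.
# Assumption: the fusion module below is illustrative, not the paper's design.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class LayerAttentionPooling(nn.Module):
    """Fuse the hidden states of all transformer layers with learned
    attention weights instead of using only the last layer."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # One learnable query scores each layer's representation.
        self.query = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, hidden_states):
        # hidden_states: tuple of (batch, seq_len, hidden) tensors,
        # one per transformer layer (embedding output excluded here).
        stacked = torch.stack(hidden_states, dim=1)            # (B, L, T, H)
        # Score each layer by its mean-pooled sentence vector.
        scores = self.query(stacked.mean(dim=2))               # (B, L, 1)
        weights = torch.softmax(scores, dim=1)                 # (B, L, 1)
        # Weighted sum over layers -> fused token representations.
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (B, T, H)
        return fused


tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
mbert = AutoModel.from_pretrained(
    "bert-base-multilingual-cased", output_hidden_states=True
)
pool = LayerAttentionPooling(mbert.config.hidden_size)

batch = tokenizer(["mBERT transfers across languages."], return_tensors="pt")
with torch.no_grad():
    out = mbert(**batch)
# hidden_states[0] is the embedding output; [1:] are the 12 transformer layers.
fused = pool(out.hidden_states[1:])   # shape: (1, seq_len, 768)
```

In a downstream setup, the fused representation would replace the last-layer output as input to the task-specific head (e.g., a classifier for XNLI or a token-level tagger for NER/POS), with the attention weights trained jointly with that head.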