Multilingual pre-trained language models (e.g., mBERT, XLM, and XLM-R) have shown impressive performance on cross-lingual natural language understanding tasks. However, these models are computationally intensive and difficult to deploy on resource-restricted devices. In this paper, we propose a simple yet effective distillation method (LightMBERT) for transferring the cross-lingual generalization ability of multilingual BERT to a small student model. The experimental results empirically demonstrate the efficiency and effectiveness of LightMBERT, which significantly outperforms the baselines and performs comparably to the teacher mBERT.
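To illustrate the general teacher-student setup the abstract refers to, the following is a minimal sketch of a standard soft-label distillation loss in PyTorch. It is not the exact LightMBERT objective; the temperature value and loss form are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Generic knowledge-distillation loss: KL divergence between the
    temperature-softened teacher and student output distributions.
    (Illustrative only; not necessarily the loss used by LightMBERT.)"""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t * t)
```

In a typical distillation loop, the teacher (e.g., mBERT) is frozen and run in evaluation mode, while the smaller student is trained to minimize this loss, possibly combined with a supervised task loss.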