This paper presents a new pre-trained language model, DeBERTaV3, which improves the original DeBERTa model by replacing masked language modeling (MLM) with replaced token detection (RTD), a more sample-efficient pre-training task. Our analysis shows that vanilla embedding sharing in ELECTRA hurts training efficiency and model performance, because the training losses of the discriminator and the generator pull the token embeddings in different directions, creating "tug-of-war" dynamics. We thus propose a new gradient-disentangled embedding sharing method that avoids these tug-of-war dynamics, improving both training efficiency and the quality of the pre-trained model. We have pre-trained DeBERTaV3 using the same settings as DeBERTa to demonstrate its exceptional performance on a wide range of downstream natural language understanding (NLU) tasks. Taking the GLUE benchmark with eight tasks as an example, the DeBERTaV3 Large model achieves a 91.37% average score, which is 1.37% higher than DeBERTa and 1.91% higher than ELECTRA, setting a new state of the art (SOTA) among models with a similar structure. Furthermore, we have pre-trained a multi-lingual model, mDeBERTa, and observed larger improvements over strong baselines than we saw for the English models. For example, the mDeBERTa Base model achieves 79.8% zero-shot cross-lingual accuracy on XNLI, a 3.6% improvement over XLM-R Base, creating a new SOTA on this benchmark. We have made our pre-trained models and inference code publicly available at https://github.com/microsoft/DeBERTa.
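To make the gradient-disentangled embedding sharing idea concrete, the following is a minimal PyTorch sketch, not the released DeBERTa implementation: the discriminator reuses the generator's token embeddings through a stop-gradient, and adds a zero-initialized residual embedding that only the discriminator's RTD loss updates. The module and parameter names (`GradientDisentangledEmbedding`, `delta_embedding`) are hypothetical, chosen for illustration.

```python
import torch
import torch.nn as nn


class GradientDisentangledEmbedding(nn.Module):
    """Sketch of gradient-disentangled embedding sharing (GDES).

    The generator's MLM loss is the only loss that updates the shared
    embedding E_G; the discriminator sees sg(E_G) + E_delta, so its RTD
    loss only updates the residual embedding E_delta.
    """

    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        # E_G: shared token embedding, trained by the generator's MLM loss.
        self.generator_embedding = nn.Embedding(vocab_size, hidden_size)
        # E_delta: residual embedding, trained by the discriminator's RTD loss.
        self.delta_embedding = nn.Embedding(vocab_size, hidden_size)
        nn.init.zeros_(self.delta_embedding.weight)

    def generator_embed(self, input_ids: torch.Tensor) -> torch.Tensor:
        return self.generator_embedding(input_ids)

    def discriminator_embed(self, input_ids: torch.Tensor) -> torch.Tensor:
        # detach() blocks the RTD loss from pulling E_G in a different
        # direction than the MLM loss (the "tug-of-war" dynamics).
        shared = self.generator_embedding(input_ids).detach()
        return shared + self.delta_embedding(input_ids)


if __name__ == "__main__":
    emb = GradientDisentangledEmbedding(vocab_size=128, hidden_size=16)
    ids = torch.randint(0, 128, (2, 8))
    # Backprop a dummy "discriminator" loss through the disentangled path.
    emb.discriminator_embed(ids).sum().backward()
    # Only the residual embedding receives gradients from this loss.
    assert emb.generator_embedding.weight.grad is None
    assert emb.delta_embedding.weight.grad is not None
```

In this sketch, the generator and discriminator still share one embedding table at inference time, but during pre-training only the generator's loss shapes it, which is the property the abstract credits for avoiding the tug-of-war dynamics.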