This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UniLM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
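To make the mask-based unification concrete, here is a minimal illustrative sketch (not the authors' released code) of how the three self-attention masks could be constructed for a single packed sequence, assuming the first `src_len` tokens form the source segment in the sequence-to-sequence case; the function name and layout are hypothetical.

```python
import torch

def build_attention_mask(seq_len: int, mode: str, src_len: int = 0) -> torch.Tensor:
    """Return a (seq_len, seq_len) mask where mask[i, j] = 1 means
    position i may attend to position j. Purely illustrative."""
    if mode == "bidirectional":
        # Cloze-style objective: every token attends to the full context.
        return torch.ones(seq_len, seq_len)
    if mode == "unidirectional":
        # Left-to-right LM: token i attends only to positions <= i.
        return torch.tril(torch.ones(seq_len, seq_len))
    if mode == "seq2seq":
        # Source tokens attend bidirectionally within the source segment;
        # target tokens attend to all source tokens plus their left context.
        mask = torch.zeros(seq_len, seq_len)
        mask[:, :src_len] = 1  # every position sees the source segment
        mask[src_len:, src_len:] = torch.tril(
            torch.ones(seq_len - src_len, seq_len - src_len)
        )  # causal attention within the target segment
        return mask
    raise ValueError(f"unknown mode: {mode}")

# Example: a 6-token sequence whose first 3 tokens are the source segment.
print(build_attention_mask(6, "seq2seq", src_len=3))
```

The same shared Transformer weights are used under all three masks; only the mask changes per pre-training batch, which is what allows one model to serve both understanding and generation tasks.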