We present a new approach for pretraining a bi-directional transformer model that provides significant performance gains across a variety of language understanding problems. Our model solves a cloze-style word reconstruction task, where each word is ablated and must be predicted given the rest of the text. Experiments demonstrate large performance gains on GLUE and new state-of-the-art results on NER as well as constituency parsing benchmarks, consistent with the concurrently introduced BERT model. We also present a detailed analysis of a number of factors that contribute to effective pretraining, including data domain and size, model capacity, and variations on the cloze objective.
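To make the cloze-style reconstruction idea concrete, the following is a minimal sketch of such an objective, not the paper's implementation: the toy encoder, vocabulary size, masking rate, and all names are assumptions, and the random-masking variant shown here is a simplification of an objective that predicts every ablated word from its surrounding context.

```python
# Minimal sketch of a cloze-style reconstruction objective (illustrative only;
# the toy encoder, sizes, and masking scheme are assumptions, not the paper's model).
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, MASK_ID, PAD_ID = 1000, 64, 0, 1

class ToyBiEncoder(nn.Module):
    """A tiny bidirectional transformer encoder with a vocabulary prediction head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        layer = nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def cloze_loss(model, tokens, mask_prob=0.15):
    """Ablate a random subset of tokens and score the model only on reconstructing them."""
    mask = (torch.rand(tokens.shape) < mask_prob) & (tokens != PAD_ID)
    corrupted = tokens.masked_fill(mask, MASK_ID)      # ablate the selected words
    logits = model(corrupted)                          # predict them from the remaining text
    targets = tokens.masked_fill(~mask, -100)          # ignore unmasked positions in the loss
    return nn.functional.cross_entropy(logits.view(-1, VOCAB_SIZE), targets.view(-1))

model = ToyBiEncoder()
batch = torch.randint(2, VOCAB_SIZE, (8, 32))          # fake token ids for illustration
loss = cloze_loss(model, batch)
loss.backward()
```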