Pre-training text representations has recently been shown to significantly improve the state of the art in many natural language processing tasks. The central goal of pre-training is to learn text representations that are useful for subsequent tasks. However, existing approaches are optimized by minimizing a proxy objective, such as the negative log-likelihood of language modeling. In this work, we introduce a learning algorithm that directly optimizes the model's ability to learn text representations for effective learning of downstream tasks. We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps. The standard multi-task learning objective adopted in BERT is a special case of our learning algorithm in which the depth of meta-train is zero. We study the problem in two settings, unsupervised pre-training and supervised pre-training, with different pre-training objectives to verify the generality of our approach. Experimental results show that our algorithm brings improvements and learns better initializations for a variety of downstream tasks.
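To make the stated connection concrete, the sketch below illustrates how a MAML-style pre-training loop with a configurable number of meta-train steps reduces to a standard multi-task update when that depth is zero. It is a minimal toy example on synthetic linear-regression tasks using first-order updates; the task sampler, the `meta_depth` parameter, and the learning rates are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_task():
    """Sample a toy 'pre-training task': two data splits sharing one ground truth."""
    w_true = rng.normal(size=5)

    def split(n=32):
        X = rng.normal(size=(n, 5))
        return X, X @ w_true + 0.1 * rng.normal(size=n)

    return split(), split()  # (meta-train split, meta-test split)


def grad(w, X, y):
    """Gradient of mean squared error for a linear model."""
    return 2.0 * X.T @ (X @ w - y) / len(y)


def meta_pretrain(meta_depth, meta_steps=200, inner_lr=0.05, outer_lr=0.05):
    """First-order MAML-style pre-training with `meta_depth` inner meta-train steps.

    With meta_depth == 0 the inner loop is skipped, so the outer update uses the
    gradient at the shared initialization itself, i.e. plain multi-task training.
    """
    w = np.zeros(5)  # shared initialization being learned
    for _ in range(meta_steps):
        (X_tr, y_tr), (X_val, y_val) = make_task()
        w_fast = w.copy()
        for _ in range(meta_depth):          # inner adaptation on meta-train data
            w_fast -= inner_lr * grad(w_fast, X_tr, y_tr)
        # Outer update: evaluate the gradient at the adapted weights (first-order).
        w -= outer_lr * grad(w_fast, X_val, y_val)
    return w


w_multitask = meta_pretrain(meta_depth=0)  # standard multi-task objective
w_meta = meta_pretrain(meta_depth=3)       # meta-learned initialization
```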