Pre-trained models have been adopted in many fields in recent years, ranging from natural language understanding to computer vision and natural language generation. However, the performance of these natural language generation models depends heavily on the scale of the model and the size of the dataset. Although larger language models excel in some respects, they cannot learn up-to-date knowledge and are relatively difficult to retrain. In this paper, we propose a new adversarial process learning method called Auto-Learning, which can improve the performance of any natural language generation model without the help of additional datasets. Auto-Learning involves two models: $G$ is a text generation model, and $D$ tests whether the text generated by $G$ is legitimate. First, the fine-tuned $D$ model serves as the knowledge base, analogous to prior knowledge in the brain, before the process begins. Then, the text generated by $G$ is fed to $D$, which determines whether the text is legitimate. Finally, $G$ is fine-tuned according to the output of $D$. This adversarial process resembles the brain improving itself through a priori knowledge. When the system needs to learn something new, only the $D$ model has to be fine-tuned. Our approach applies to autoregressive language modeling with any Transformer-based model. It performs well on existing experimental tasks, producing more grammatical generated text and better results on some text comprehension tasks.
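The following is a minimal sketch of the adversarial loop described above, assuming a GPT-2 generator $G$ and a BERT-style classifier as the discriminator $D$ from Hugging Face Transformers. The model names, the reward-weighted fine-tuning loss, and all hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the Auto-Learning loop: G generates text, the (already fine-tuned)
# discriminator D judges it, and G is updated according to D's verdict.
# Everything below is an assumed setup for illustration only.
import torch
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          AutoModelForSequenceClassification)

gen_tok = AutoTokenizer.from_pretrained("gpt2")
G = AutoModelForCausalLM.from_pretrained("gpt2")            # generator G
disc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
D = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)                      # discriminator D (assumed fine-tuned)
D.eval()                                                     # D stays fixed during this loop

optimizer = torch.optim.AdamW(G.parameters(), lr=1e-5)
prompts = ["The weather today is", "Researchers have found that"]

for step in range(100):
    prompt = prompts[step % len(prompts)]
    inputs = gen_tok(prompt, return_tensors="pt")

    # 1) G generates candidate text.
    sample = G.generate(**inputs, max_new_tokens=30, do_sample=True,
                        pad_token_id=gen_tok.eos_token_id)
    text = gen_tok.decode(sample[0], skip_special_tokens=True)

    # 2) D judges whether the generated text is "legitimate".
    with torch.no_grad():
        d_out = D(**disc_tok(text, return_tensors="pt", truncation=True))
        legit_prob = torch.softmax(d_out.logits, dim=-1)[0, 1]  # P(legitimate)

    # 3) Fine-tune G according to D's output: here, a reward-weighted
    #    likelihood update that reinforces samples D accepts (one possible scheme).
    lm_inputs = gen_tok(text, return_tensors="pt")
    out = G(**lm_inputs, labels=lm_inputs["input_ids"])
    loss = legit_prob.detach() * out.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

To teach the system new knowledge under this scheme, one would only fine-tune $D$ on the new data and rerun the loop; $G$ then adapts through $D$'s judgments rather than through an additional generation dataset.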