Recent advances in neural architectures, such as the Transformer, coupled with the emergence of large-scale pre-trained models such as BERT, have revolutionized the field of Natural Language Processing (NLP), pushing the state of the art for a number of NLP tasks. A rich family of variations of these models has been proposed, such as RoBERTa, ALBERT, and XLNet, but fundamentally, they all remain limited in their ability to model certain kinds of information, and they cannot cope with certain information sources that were easy for pre-existing models to handle. Thus, here we aim to shed light on some important theoretical limitations of pre-trained BERT-style models that are inherent in the general Transformer architecture. First, we demonstrate in practice, on two general types of tasks -- segmentation and segment labeling -- and on four datasets, that these limitations are indeed harmful and that addressing them, even in some very simple and naive ways, can yield sizable improvements over vanilla RoBERTa and XLNet models. Then, we offer a more general discussion of desiderata for future additions to the Transformer architecture that would increase its expressiveness, which we hope could help in the design of the next generation of deep NLP architectures.