Pre-trained language models like BERT and its variants have recently achieved impressive performance on various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers from a large memory footprint and high computation cost. Although all its attention heads query the whole input sequence to generate the attention map from a global perspective, we observe that some heads only need to learn local dependencies, which indicates computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads and directly model local dependencies. The new convolution heads, together with the remaining self-attention heads, form a mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build the ConvBERT model. Experiments show that ConvBERT significantly outperforms BERT and its variants on various downstream tasks, with lower training cost and fewer model parameters. Remarkably, the ConvBERT-base model achieves an 86.4 GLUE score, 0.7 higher than ELECTRA-base, while using less than 1/4 of the training cost. Code and pre-trained models will be released.
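To make the span-based dynamic convolution idea concrete, the following is a minimal sketch (not the authors' released implementation): convolution kernels are generated from a local span around each position rather than from a single token, softmax-normalized, and applied as a lightweight convolution over the value vectors. All module and parameter names (SpanDynamicConv, span_encoder, kernel_gen) are illustrative assumptions, and the kernel size is an arbitrary choice.

```python
# Minimal sketch of a span-based dynamic convolution head, assuming a
# PyTorch setting; names and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpanDynamicConv(nn.Module):
    def __init__(self, d_model: int, kernel_size: int = 7):
        super().__init__()
        self.kernel_size = kernel_size
        # Depthwise conv summarizes a local span around each position,
        # so the generated kernel depends on the span, not a single token.
        self.span_encoder = nn.Conv1d(
            d_model, d_model, kernel_size,
            padding=kernel_size // 2, groups=d_model)
        # Maps the span summary to per-position convolution kernel weights.
        self.kernel_gen = nn.Linear(d_model, kernel_size)
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        span = self.span_encoder(x.transpose(1, 2)).transpose(1, 2)
        # Softmax-normalized dynamic kernels, one per position: (b, t, k)
        kernels = F.softmax(self.kernel_gen(span), dim=-1)
        v = self.value(x)
        # Unfold values into local windows of size k: (b, t, k, d)
        v = F.pad(v, (0, 0, self.kernel_size // 2, self.kernel_size // 2))
        windows = v.unfold(1, self.kernel_size, 1).permute(0, 1, 3, 2)
        # Convolve: weighted sum of each window with its dynamic kernel.
        return torch.einsum('btk,btkd->btd', kernels, windows)


if __name__ == "__main__":
    layer = SpanDynamicConv(d_model=64, kernel_size=7)
    out = layer(torch.randn(2, 16, 64))
    print(out.shape)  # torch.Size([2, 16, 64])
```

In the mixed attention block described above, heads of this local form would sit alongside ordinary self-attention heads, so global context is still captured by the remaining attention heads while local dependencies are handled by convolution at linear cost in sequence length.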