Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at https://github.com/google-research/google-research/tree/master/albert.
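To make the abstract's claim about parameter reduction concrete, the following is a minimal PyTorch sketch, assuming the two techniques are factorized embedding parameterization and cross-layer parameter sharing as described in the body of the paper; the class names and all sizes are illustrative assumptions, not the released implementation.

# Minimal sketch (not the authors' code) of the two parameter-reduction ideas
# the abstract refers to. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn


class FactorizedEmbedding(nn.Module):
    """Maps token ids to hidden size H through a small embedding size E (E << H),
    so embedding parameters grow as O(V*E + E*H) instead of O(V*H)."""

    def __init__(self, vocab_size: int, embedding_size: int, hidden_size: int):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embedding_size)
        self.projection = nn.Linear(embedding_size, hidden_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.projection(self.word_embeddings(token_ids))


class SharedLayerEncoder(nn.Module):
    """Applies the same Transformer layer repeatedly (cross-layer parameter
    sharing), so encoder parameters do not grow with depth."""

    def __init__(self, hidden_size: int, num_heads: int, num_layers: int):
        super().__init__()
        self.shared_layer = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=num_heads, batch_first=True
        )
        self.num_layers = num_layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for _ in range(self.num_layers):
            x = self.shared_layer(x)
        return x


if __name__ == "__main__":
    # Illustrative configuration only; the actual model sizes are given in the paper body.
    embed = FactorizedEmbedding(vocab_size=30000, embedding_size=128, hidden_size=768)
    encoder = SharedLayerEncoder(hidden_size=768, num_heads=12, num_layers=12)
    tokens = torch.randint(0, 30000, (2, 16))   # (batch, sequence length)
    hidden = encoder(embed(tokens))             # (2, 16, 768)
    print(hidden.shape)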