Language modeling on large-scale datasets leads to impressive performance gains on various downstream language tasks. The validation pre-training loss (or perplexity in autoregressive language modeling) is often used as the evaluation metric when developing language models, since the pre-training loss tends to be well-correlated with downstream performance (which is itself difficult to evaluate comprehensively). Contrary to this conventional wisdom, this paper shows that 1) pre-training loss cannot fully explain downstream performance and 2) flatness of the model is well-correlated with downstream performance where pre-training loss is not. On simplified datasets, we identify three ways to produce models with the same (statistically optimal) pre-training loss but different downstream performance: continuing pre-training after convergence, increasing the model size, and changing the training algorithm. These experiments demonstrate the existence of an implicit bias of pre-training algorithms/optimizers -- among models with the same minimal pre-training loss, they implicitly prefer more transferable ones. Toward understanding this implicit bias, we prove that SGD with standard mini-batch noise implicitly prefers flatter minima in language models, and empirically observe a strong correlation between flatness and downstream performance among models with the same minimal pre-training loss. We also prove in a synthetic language setting that among the models with the minimal pre-training loss, the flattest model transfers to downstream tasks.
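As a concrete illustration of the kind of flatness measure the abstract refers to, the sketch below estimates the trace of the Hessian of a loss with Hutchinson's estimator in PyTorch. This is a common flatness proxy, not the paper's specific implementation; the function and variable names are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's code): Hutchinson estimate of
# tr(H), where H is the Hessian of the pre-training loss w.r.t. the parameters.
import torch


def hessian_trace(loss_fn, params, n_samples=10):
    """Estimate tr(H) for the scalar loss returned by loss_fn()."""
    params = [p for p in params if p.requires_grad]
    loss = loss_fn()
    # First-order gradients with create_graph=True so we can differentiate again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace_est = 0.0
    for _ in range(n_samples):
        # Rademacher probe vectors v with entries in {-1, +1}.
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
        # Hessian-vector product: d(g . v)/d(theta) = H v.
        hvps = torch.autograd.grad(grads, params, grad_outputs=vs, retain_graph=True)
        # E[v^T H v] = tr(H) for Rademacher v.
        trace_est += sum((v * hvp).sum() for v, hvp in zip(vs, hvps)).item()
    return trace_est / n_samples


if __name__ == "__main__":
    # Toy usage: flatness of a tiny linear classifier on random data.
    torch.manual_seed(0)
    model = torch.nn.Linear(8, 2)
    x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
    loss_fn = lambda: torch.nn.functional.cross_entropy(model(x), y)
    print("estimated tr(H):", hessian_trace(loss_fn, model.parameters()))
```

Lower estimated trace corresponds to a flatter minimum; under the abstract's claim, such models would be expected to transfer better among those reaching the same pre-training loss.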