Language models demonstrate remarkable abilities when pre-trained on large text corpora and fine-tuned for specific tasks, but how and why pre-training shapes the success of the final model remains poorly understood. Notably, although pre-training success is often quantified by cross-entropy loss, cross-entropy can be a poor predictor of downstream performance. Instead, we provide a theoretical perspective on this relationship through the lens of \emph{coverage}, which quantifies the probability mass the pre-trained model places on high-quality responses and which is necessary and sufficient for post-training and test-time scaling methods such as Best-of-N to succeed. Our main results develop an understanding of \emph{the coverage principle}, a phenomenon whereby next-token prediction implicitly optimizes toward a model with good coverage. In particular, we uncover a mechanism that explains the power of coverage in predicting downstream performance: \emph{coverage generalizes faster than cross-entropy}, avoiding spurious dependence on problem-dependent parameters such as the sequence length. We also study practical algorithmic interventions with provable benefits for improving coverage, including (i) model/checkpoint selection procedures, (ii) gradient normalization schemes, and (iii) test-time decoding strategies.
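To make the coverage/Best-of-N connection concrete, the sketch below estimates coverage by Monte Carlo sampling and uses it to predict Best-of-N success under an i.i.d.-sampling assumption. This is a minimal illustration rather than the paper's formal definitions; the callables `sample_response` and `is_high_quality` (e.g., a verifier or reward threshold) are hypothetical placeholders.

```python
def estimate_coverage_and_best_of_n(sample_response, is_high_quality, n=16, trials=200):
    """Monte Carlo sketch of the coverage / Best-of-N relationship.

    sample_response: () -> one response drawn from the pre-trained model (hypothetical)
    is_high_quality: (response) -> bool, e.g. a verifier or reward check (hypothetical)
    """
    hits = sum(is_high_quality(sample_response()) for _ in range(trials))
    coverage = hits / trials                    # estimated mass on high-quality responses
    best_of_n = 1.0 - (1.0 - coverage) ** n     # P(at least one of N i.i.d. samples succeeds)
    return coverage, best_of_n
```

Under this simple model, if the coverage probability is vanishingly small then Best-of-N fails for any feasible N, while coverage on the order of 1/N already yields success with constant probability, which is the sense in which coverage is both necessary and sufficient for Best-of-N.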