Large language models (LLMs) trained via KL-regularized reinforcement learning demonstrate strong instruction-following, self-correction, and reasoning abilities, yet the theoretical understanding of these behaviors remains limited. We exploit the closed-form energy-based model (EBM) structure of the optimal KL-regularized policy to provide a unified variational analysis of LLMs. For instruction-tuned models, under natural assumptions on reward potentials and pretraining symmetry, we prove that the transition kernel satisfies detailed balance with respect to a scalar potential encoding response quality. This yields monotonic KL convergence to a high-quality stationary distribution, bounded hitting times to superior states, and exponential mixing governed by the spectral gap. For reasoning models trained with verifiable rewards (RLVR), we show the objective is equivalent to expected KL minimization toward an optimal reasoning distribution, with the suboptimality gap reducing to the Bernoulli KL between target and current accuracies along the natural gradient flow. This helps explain empirically observed entropy-accuracy trade-offs.
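The two central quantities in the abstract admit short numerical sketches. Below is a minimal illustration, under standard assumptions: the closed-form optimal KL-regularized policy is the reference policy exponentially tilted by the reward (an energy-based model), and the RLVR suboptimality gap is the KL divergence between two Bernoulli distributions over target and current accuracies. The function names and the finite-support setup are illustrative, not from the paper.

```python
import numpy as np

def optimal_kl_policy(ref_probs, rewards, beta):
    """Closed-form optimal policy of KL-regularized RL on a finite support:
    pi*(y) proportional to pi_ref(y) * exp(r(y) / beta),
    i.e. an energy-based tilt of the reference distribution."""
    logits = np.log(ref_probs) + rewards / beta
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    return w / w.sum()

def bernoulli_kl(p, q):
    """KL(Bern(p) || Bern(q)): the Bernoulli KL between a target
    accuracy p and the current accuracy q."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# As beta -> 0 the tilted policy concentrates on the highest-reward response;
# as beta -> infinity it recovers the reference policy.
ref = np.array([0.5, 0.3, 0.2])
r = np.array([1.0, 2.0, 0.5])
print(optimal_kl_policy(ref, r, beta=1.0))
print(bernoulli_kl(0.9, 0.6))
```

The temperature beta here plays the role of the KL-regularization coefficient: it interpolates between pure reward maximization and staying at the pretrained reference, which is the trade-off the abstract's entropy-accuracy discussion concerns.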