As scaled language models (LMs) approach human-level reasoning capabilities, self-improvement emerges as a solution for synthesizing high-quality data corpora. While previous research has identified model collapse as a risk of self-improvement, where model outputs become increasingly deterministic, we uncover a more fundamental challenge: the superficial self-improved reasoners phenomenon. In particular, our analysis reveals that even when LMs show improved in-domain (ID) reasoning accuracy, they compromise their generalized reasoning capabilities on out-of-domain (OOD) tasks, owing to memorization rather than genuine understanding. Through a systematic investigation of LM architecture, we find that during self-improvement, weight updates concentrate in layers that are less critical for reasoning, leading to superficial learning. To address this, we propose Iterative Model Merging (IMM), a method that strategically combines weights from the original and self-improved models to preserve generalization while incorporating genuine reasoning gains. Our approach effectively mitigates both model collapse and superficial learning, moving towards more stable self-improving systems.
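The abstract does not specify IMM's exact merging rule. As a minimal sketch, assuming the merge reduces to per-parameter linear interpolation between the original and self-improved checkpoints after each round, it might look like the following in PyTorch (`merge_state_dicts` and the mixing coefficient `alpha` are hypothetical names, not from the paper):

```python
import copy
import torch
import torch.nn as nn

def merge_state_dicts(original_sd, improved_sd, alpha):
    """Per-parameter linear interpolation (assumed merging rule):
    merged = (1 - alpha) * original + alpha * improved.
    alpha controls how much of the self-improved weights is kept."""
    return {
        name: (1 - alpha) * original_sd[name] + alpha * improved_sd[name]
        for name in original_sd
    }

# Toy demonstration on a small model standing in for an LM.
base = nn.Linear(8, 8)
improved = copy.deepcopy(base)
with torch.no_grad():
    # Stand-in for one round of self-improvement fine-tuning:
    # perturb the copy's weights so the two checkpoints differ.
    for p in improved.parameters():
        p.add_(0.1 * torch.randn_like(p))

# Merge the self-improved weights back toward the original model,
# then load the result as the checkpoint for the next round.
merged_sd = merge_state_dicts(base.state_dict(), improved.state_dict(), alpha=0.5)
base.load_state_dict(merged_sd)
```

Iterating this merge-then-fine-tune loop is what would make the procedure "iterative"; the actual choice of which layers to merge and how to set the coefficient is the paper's contribution and is not reproduced here.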