Recent probing studies reveal that large language models exhibit linear subspaces that separate true from false statements, yet the mechanism behind their emergence is unclear. We introduce a transparent, one-layer transformer toy model that reproduces such truth subspaces end-to-end and exposes one concrete route by which they can arise. We study a simple setting in which truth encoding can emerge: a data distribution where factual statements co-occur with other factual statements (and false statements with other false statements), encouraging the model to learn this distinction in order to lower the language-modeling loss on future tokens. We corroborate this pattern with experiments on pretrained language models. Finally, in the toy setting we observe a two-phase learning dynamic: networks first memorize individual factual associations within a few steps, then, over a longer horizon, learn to linearly separate true from false statements, which in turn lowers the language-modeling loss. Together, these results provide both a mechanistic demonstration and an empirical motivation for how and why linear truth representations can emerge in language models.
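To make the described data distribution concrete, the sketch below generates toy "documents" in which every statement shares the same truth value, so that the truth of earlier statements is predictive of later tokens. This is a minimal illustrative construction, not the paper's released code; all names (make_fact, make_document, N_ENTITIES, TRUE_MAP) are assumptions for exposition.

```python
# Minimal sketch of a truth-correlated data distribution: each document is
# either all-true or all-false, so tracking truth lowers LM loss on later tokens.
import random

N_ENTITIES = 50          # toy entity vocabulary (e.g. city tokens)
N_ATTRIBUTES = 50        # toy attribute vocabulary (e.g. country tokens)
TRUE_MAP = {e: random.randrange(N_ATTRIBUTES) for e in range(N_ENTITIES)}

def make_fact(entity: int, truthful: bool) -> list[str]:
    """One statement as tokens: '<e_i> is <a_j> .'"""
    if truthful:
        attr = TRUE_MAP[entity]
    else:
        # pick any attribute other than the correct one -> a false statement
        attr = random.choice([a for a in range(N_ATTRIBUTES) if a != TRUE_MAP[entity]])
    return [f"<e{entity}>", "is", f"<a{attr}>", "."]

def make_document(n_statements: int = 4) -> list[str]:
    """Sample a document whose statements are all true or all false."""
    truthful = random.random() < 0.5
    tokens: list[str] = []
    for _ in range(n_statements):
        tokens += make_fact(random.randrange(N_ENTITIES), truthful)
    return tokens

if __name__ == "__main__":
    random.seed(0)
    print(" ".join(make_document()))
```

Under this distribution, a model that infers whether the preceding statements were true can better predict the attribute token of the next statement, which is the incentive the abstract points to for learning a linear truth direction.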