With the rapid discovery of emergent phenomena in deep learning and large language models, understanding their causes has become an urgent need. Here, we propose a rigorous entropic-force theory for understanding the learning dynamics of neural networks trained with stochastic gradient descent (SGD) and its variants. Building on the theory of parameter symmetries and an entropic loss landscape, we show that representation learning is crucially governed by emergent entropic forces arising from stochasticity and discrete-time updates. These forces systematically break continuous parameter symmetries while preserving discrete ones, leading to a series of gradient-balance phenomena that resemble the equipartition property of thermal systems. These phenomena, in turn, (a) explain the universal alignment of neural representations across AI models and lead to a proof of the Platonic Representation Hypothesis, and (b) reconcile the seemingly contradictory observations of sharpness-seeking and flatness-seeking behavior in deep learning optimization. Our theory and experiments demonstrate that the combination of entropic forces and symmetry breaking is key to understanding emergent phenomena in deep learning.
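The symmetry-breaking mechanism can be illustrated with a toy example that is not from the paper itself: for a two-parameter model f(x) = u·v·x, the rescaling (u, v) → (λu, v/λ) is a continuous symmetry of the loss, and u² − v² is conserved under exact gradient flow. With discrete-time SGD, however, a single update multiplies u² − v² by the factor (1 − η²x²r²), where r is the per-sample residual, so stochastic noise keeps r nonzero and drives the parameters toward the balanced state |u| = |v|. The sketch below, with arbitrarily chosen hyperparameters, simulates this drift:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model f(x) = u * v * x: the rescaling (u, v) -> (lam*u, v/lam)
# is a continuous symmetry of the loss, and u^2 - v^2 is the
# corresponding conserved quantity under exact gradient flow.
u, v = 2.0, 0.5           # deliberately imbalanced initialization
eta, sigma = 0.03, 0.3    # step size and label-noise scale (illustrative)
imbalance0 = u**2 - v**2  # initial symmetry charge, here 3.75

for _ in range(30000):
    x = rng.normal()
    y = x + sigma * rng.normal()   # noisy target for the minimum uv = 1
    r = u * v * x - y              # per-sample residual
    gu, gv = v * x * r, u * x * r  # per-sample gradients of r^2 / 2
    u, v = u - eta * gu, v - eta * gv

# Each update multiplies u^2 - v^2 by (1 - eta^2 * x^2 * r^2) exactly,
# so stochasticity plus discreteness shrink the imbalance toward zero
# while the product uv stays near the loss minimum uv = 1.
print(u * v, u**2 - v**2)
```

Under gradient flow (η → 0, full batch) the imbalance would stay frozen at its initial value; the decay observed here is purely an artifact of noise and discrete updates, a minimal instance of the entropic forces described above.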