Grokking, the unusual phenomenon on algorithmic datasets where generalization happens long after overfitting the training data, has remained elusive. We aim to understand grokking by analyzing the loss landscapes of neural networks, identifying the mismatch between training and test losses as the cause of grokking. We refer to this as the "LU mechanism" because training and test losses (plotted against model weight norm) typically resemble an "L" and a "U", respectively. This simple mechanism can nicely explain many aspects of grokking: data size dependence, weight decay dependence, the emergence of representations, etc. Guided by this intuitive picture, we are able to induce grokking on tasks involving images, language, and molecules. In the reverse direction, we are able to eliminate grokking for algorithmic datasets. We attribute the dramatic nature of grokking for algorithmic datasets to representation learning.
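As a rough illustration of the "L"/"U" picture (not code from the paper), the sketch below rescales a trained model's weights to a target L2 norm and evaluates the average loss, so that sweeping the norm traces the training and test loss curves; `model`, `loss_fn`, `train_loader`, and `test_loader` are hypothetical PyTorch placeholders.

```python
import copy
import torch

def loss_at_weight_norm(model, target_norm, loss_fn, loader, device="cpu"):
    """Copy the model, rescale all parameters to a target L2 norm,
    and return the average loss over the given data loader."""
    probe = copy.deepcopy(model).to(device)
    with torch.no_grad():
        flat = torch.cat([p.flatten() for p in probe.parameters()])
        scale = target_norm / flat.norm()
        for p in probe.parameters():
            p.mul_(scale)
    probe.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total += loss_fn(probe(x), y).item() * len(y)
            count += len(y)
    return total / count

# Hypothetical usage: sweep the weight norm. Training loss typically
# looks like an "L" (flat once the data are fit), while test loss looks
# like a "U" (only a narrow band of norms generalizes well).
# norms = torch.linspace(0.5, 20.0, 30)
# train_curve = [loss_at_weight_norm(model, r, loss_fn, train_loader) for r in norms]
# test_curve  = [loss_at_weight_norm(model, r, loss_fn, test_loader)  for r in norms]
```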