In spite of their prevalence, the behaviour of neural networks when extrapolating far from the training distribution remains poorly understood, with existing results limited to specific cases. In this work, we prove general results -- the first of their kind -- by applying Neural Tangent Kernel (NTK) theory to analyse infinitely wide neural networks trained until convergence. We prove that the inclusion of just one Layer Norm (LN) fundamentally alters the induced NTK, transforming it into a bounded-variance kernel. As a result, the output of an infinitely wide network with at least one LN remains bounded, even on inputs far from the training data. In contrast, we show that a broad class of networks without LN can produce pathologically large outputs for certain inputs. We support these theoretical findings with experiments on finite-width networks, demonstrating that while standard networks often exhibit uncontrolled growth outside the training domain, a single LN layer effectively mitigates this instability. Finally, we explore real-world implications of this extrapolatory stability, including applications to predicting residue sizes in proteins larger than those seen during training and estimating age from facial images of ethnicities underrepresented in or absent from the training set.
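To make the claimed mechanism concrete, the following is a minimal sketch (not the paper's architecture or experimental setup) comparing a plain finite-width MLP with the same MLP containing a single LayerNorm, evaluated at random initialization on inputs scaled progressively far from a unit-scale "training" region. The input dimension, widths, and scales are illustrative assumptions; the trend it exhibits -- unbounded growth without LN, bounded outputs with one LN -- is the behaviour the abstract describes for trained infinitely wide networks.

```python
# Minimal illustration (assumed toy setup, not the paper's experiments):
# a plain ReLU MLP vs. the same MLP with a single LayerNorm, probed on
# inputs of increasing scale. With LN, the hidden representation is
# normalised, so the output magnitude stays bounded; without LN it grows
# roughly in proportion to the input scale.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_mlp(width: int = 512, use_layernorm: bool = False) -> nn.Sequential:
    layers = [nn.Linear(16, width), nn.ReLU()]
    if use_layernorm:
        # A single LN layer after the first hidden layer.
        layers.append(nn.LayerNorm(width))
    layers += [nn.Linear(width, width), nn.ReLU(), nn.Linear(width, 1)]
    return nn.Sequential(*layers)

plain, with_ln = make_mlp(), make_mlp(use_layernorm=True)

# Probe inputs at increasing distance from a unit-scale region.
for scale in [1, 10, 100, 1000]:
    x = scale * torch.randn(256, 16)
    with torch.no_grad():
        print(f"scale={scale:5d}  "
              f"|plain|={plain(x).abs().mean().item():10.2f}  "
              f"|with LN|={with_ln(x).abs().mean().item():10.2f}")
```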