Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? In this work, we make a series of empirical observations that investigate and extend the hypothesis that deeper networks are inductively biased to find solutions with lower effective rank embeddings. We conjecture that this bias exists because the volume of functions that map to low effective rank embeddings increases with depth. We show empirically that our claim holds true for finite-width linear and non-linear models in practical learning paradigms, and show that on natural data, these are often the solutions that generalize well. We then show that this simplicity bias exists both at initialization and after training, and is robust to hyper-parameters and learning methods. We further demonstrate how linear over-parameterization of deep non-linear models can be used to induce a low-rank bias, improving generalization performance on CIFAR and ImageNet without changing the modeling capacity.
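To make the two central quantities concrete, the sketch below shows (i) one common way to measure the effective rank of an embedding matrix, via the exponential of the entropy of its normalized singular values, and (ii) what linear over-parameterization means: composing two linear maps expresses the same function class as a single linear map, so modeling capacity is unchanged. This is a minimal illustration under those assumptions; the function names and the specific effective-rank convention are ours, not necessarily the ones used in the paper's experiments.

```python
import numpy as np

def effective_rank(Z: np.ndarray, eps: float = 1e-12) -> float:
    """Effective rank of a matrix Z (rows = samples), taken here as
    exp(entropy) of the normalized singular value distribution --
    one common convention, assumed for illustration."""
    s = np.linalg.svd(Z, compute_uv=False)
    p = s / (s.sum() + eps)                      # singular values as a distribution
    return float(np.exp(-np.sum(p * np.log(p + eps))))

# Linear over-parameterization: W2 @ W1 is still a single linear map, so the
# set of expressible functions (the capacity) is identical to one nn-style
# linear layer; only the parameterization, and hence the implicit bias of
# gradient descent, changes.
d = 64
rng = np.random.default_rng(0)
W1 = rng.normal(size=(d, d)) / np.sqrt(d)
W2 = rng.normal(size=(d, d)) / np.sqrt(d)
W_composed = W2 @ W1

# The product of two random maps typically has a lower effective rank than a
# single random map, illustrating the depth-induced bias at initialization.
print("single map:   ", effective_rank(W1))
print("composed maps:", effective_rank(W_composed))
```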