The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
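As a concrete illustration of the implicit regularization described above, the following minimal NumPy sketch (our own illustration, not taken from the survey; the dimensions, step size, and iteration count are arbitrary choices) runs gradient descent from a zero initialization on an overparametrized least-squares problem. Because the iterates remain in the row space of the data matrix, gradient descent converges to the minimum-norm solution that perfectly fits the training data.

```python
import numpy as np

# Minimal sketch: gradient descent on overparametrized least squares,
# started at zero, converges to the minimum-norm interpolating solution.
rng = np.random.default_rng(0)
n, d = 20, 200                       # many more parameters than samples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

theta = np.zeros(d)                  # zero init lies in the row space of X
lr = 1e-3                            # small enough for the top eigenvalue of X^T X
for _ in range(5000):
    theta -= lr * X.T @ (X @ theta - y)   # gradient of 0.5 * ||X theta - y||^2

# Closed-form minimum-norm interpolator: X^T (X X^T)^{-1} y
theta_min_norm = X.T @ np.linalg.solve(X @ X.T, y)
print(np.abs(X @ theta - y).max())             # ~0: training data fit exactly
print(np.linalg.norm(theta - theta_min_norm))  # ~0: implicit minimum-norm bias
```

The same implicit bias toward minimum-norm interpolation underlies the analyses surveyed below, where the prediction rule decomposes into a simple component useful for prediction and a spiky component responsible for the interpolation.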