Current deep neural networks are highly overparameterized (up to billions of connection weights) and nonlinear. Yet they can fit data almost perfectly through variants of gradient descent algorithms and achieve unexpected levels of prediction accuracy without overfitting. These are formidable results that escape the bias-variance predictions of statistical learning and pose conceptual challenges for non-convex optimization. In this paper, we use methods from the statistical physics of disordered systems to analytically study the computational fallout of overparameterization in nonconvex neural network models. As the number of connection weights increases, we follow the changes in the geometrical structure of the different minima of the error loss function and relate them to learning and generalization performance. We find that there exists a gap between the SAT/UNSAT interpolation transition, where solutions begin to exist, and the point where algorithms start to find solutions, i.e., where accessible solutions appear. This second phase transition coincides with the discontinuous appearance of atypical solutions that are locally extremely entropic, i.e., flat regions of the weight space that are particularly solution-dense and have good generalization properties. Although exponentially rare compared to typical solutions (which are narrower and extremely difficult to sample), entropic solutions are accessible to the algorithms used in learning. For data generated by a structurally different network, we can characterize the generalization error of the different solutions and optimize the Bayesian prediction. Numerical tests on observables suggested by the theory confirm that the scenario extends to realistic deep networks.
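One of the observables alluded to in the last sentence is the flatness (local entropy) of the minima found by gradient descent. The snippet below is a minimal sketch, not the authors' code, of how such a flatness probe can be measured numerically: it trains a small, deliberately overparameterized one-hidden-layer network on synthetic teacher-generated data and then records how the training error degrades under multiplicative Gaussian perturbations of the weights. All sizes, the architecture, and the perturbation protocol are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of a local-energy /
# flatness probe around a solution found by gradient descent.
# A flat, solution-dense region keeps the training error low even under
# sizeable random perturbations of the weights.

import numpy as np

rng = np.random.default_rng(0)

# synthetic data from a "teacher" perceptron (assumed setup)
n_samples, n_inputs = 200, 20
X = rng.standard_normal((n_samples, n_inputs))
teacher = rng.standard_normal(n_inputs)
y = np.sign(X @ teacher)                          # +/-1 labels

# overparameterized "student": one hidden layer with tanh units
n_hidden = 200                                    # more weights than samples
W1 = rng.standard_normal((n_inputs, n_hidden)) / np.sqrt(n_inputs)
w2 = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)

def forward(W1, w2, X):
    return np.tanh(X @ W1) @ w2

def train_error(W1, w2):
    return np.mean(np.sign(forward(W1, w2, X)) != y)

# plain full-batch gradient descent on a quadratic surrogate loss
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1)
    out = H @ w2
    grad_out = (out - y) / n_samples
    w2 -= lr * (H.T @ grad_out)
    W1 -= lr * (X.T @ (np.outer(grad_out, w2) * (1.0 - H**2)))

print("training error of the found solution:", train_error(W1, w2))

# flatness probe: average training error after multiplicative Gaussian noise
# of scale sigma is applied to every weight
for sigma in (0.0, 0.05, 0.1, 0.2, 0.4):
    errs = []
    for _ in range(20):
        W1p = W1 * (1.0 + sigma * rng.standard_normal(W1.shape))
        w2p = w2 * (1.0 + sigma * rng.standard_normal(w2.shape))
        errs.append(train_error(W1p, w2p))
    print(f"sigma={sigma:.2f}  perturbed training error={np.mean(errs):.3f}")
```

A slowly rising perturbed-error curve signals a wide, entropic minimum of the kind the abstract describes; a sharply rising one signals a narrow, typical minimum.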