Deep neural networks are highly expressive machine learning models capable of interpolating arbitrary datasets. Deep networks are typically optimized via first-order methods, and the optimization process crucially depends on the characteristics of the network as well as of the dataset. This work sheds light on the relationship between network size and dataset properties, with an emphasis on deep residual networks (ResNets). Our main contribution is to show that, as long as the network Jacobian is full rank, gradient descent on the quadratic loss with a smooth activation converges to a global minimum even when the ResNet width $m$ scales only linearly with the sample size $n$, independently of the network depth. To the best of our knowledge, this is the first work to provide a theoretical convergence guarantee for neural networks in the $m=\Omega(n)$ regime.
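As a concrete numerical illustration of the full-rank Jacobian condition, the following is a minimal sketch, not the paper's construction: it assumes a toy tanh ResNet with hypothetical sizes ($n=8$, $d=4$, $m=16$, depth $3$) and a $1/\text{depth}$ residual scaling, flattens the parameter-to-output Jacobian at the $n$ training points, checks that it has full row rank $n$, and then runs gradient descent on the quadratic loss, which under this condition should drive the loss toward zero, i.e. a global minimum.

```python
# Minimal sketch of the full-rank Jacobian condition on a toy ResNet.
# All sizes, the tanh activation, and the 1/depth residual scaling are
# illustrative assumptions, not the paper's construction.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

n, d, m, depth = 8, 4, 16, 3  # sample size n, input dim d, width m = O(n)
keys = jax.random.split(jax.random.PRNGKey(0), depth + 4)
X = jax.random.normal(keys[0], (n, d))   # toy inputs
y = jax.random.normal(keys[1], (n,))     # toy labels

params = {
    "W_in": jax.random.normal(keys[2], (d, m)) / jnp.sqrt(d),
    "blocks": [jax.random.normal(k, (m, m)) / jnp.sqrt(m)
               for k in keys[3:3 + depth]],
    "w_out": jax.random.normal(keys[3 + depth], (m,)) / jnp.sqrt(m),
}

def f(p, X):
    """Toy ResNet: residual blocks with a smooth (tanh) activation."""
    h = jnp.tanh(X @ p["W_in"])
    for W in p["blocks"]:
        h = h + jnp.tanh(h @ W) / depth  # skip connection + scaled block
    return h @ p["w_out"]                # one scalar output per sample

# Flatten all parameters and form the n x p Jacobian of outputs w.r.t. them.
flat, unravel = ravel_pytree(params)
J = jax.jacobian(lambda p: f(unravel(p), X))(flat)
print("Jacobian shape:", J.shape)
print("rank:", jnp.linalg.matrix_rank(J), "(full row rank iff == n =", n, ")")

# Gradient descent on the quadratic loss; the step size is set from the
# largest eigenvalue of the empirical Gram matrix J J^T at initialization.
loss = lambda p: 0.5 * jnp.sum((f(unravel(p), X) - y) ** 2)
eta = 1.0 / jnp.linalg.norm(J @ J.T, 2)
step = jax.jit(lambda p: p - eta * jax.grad(loss)(p))

p = flat
print("initial loss:", loss(p))
for _ in range(3000):
    p = step(p)
print("final loss:", loss(p))  # should approach zero under full rank
```

Setting the step size from the top eigenvalue of $JJ^\top$ at initialization is a conservative choice that keeps the linearized dynamics stable; in this overparameterized toy setting ($p \gg n$) the Jacobian is full row rank with high probability, and the loss decays toward the interpolating global minimum.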