Finding the optimal configuration of parameters of a ResNet is a nonconvex minimization problem, yet first-order methods find the global optimum in the overparameterized regime. We study this phenomenon via mean-field analysis, translating the training dynamics of the ResNet into a gradient-flow partial differential equation (PDE) and examining the convergence properties of this limiting process. The activation function is assumed to be $2$-homogeneous or partially $1$-homogeneous; the regularized ReLU satisfies the latter condition. We show that if the ResNet is sufficiently large, with depth and width depending algebraically on the desired accuracy and confidence levels, first-order optimization methods can find global minimizers that fit the training data.
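To make the homogeneity assumption concrete: a function $\sigma$ is $p$-homogeneous if $\sigma(\lambda x) = \lambda^p \sigma(x)$ for all $\lambda > 0$. The sketch below is an illustrative numerical check (not from the paper); `relu_sq`, i.e. the squared ReLU, is a standard example of a $2$-homogeneous activation, while the plain ReLU is $1$-homogeneous.

```python
import numpy as np

def relu(x):
    # ReLU is 1-homogeneous: relu(lam * x) = lam * relu(x) for lam > 0.
    return np.maximum(x, 0.0)

def relu_sq(x):
    # Squared ReLU: a standard example of a 2-homogeneous activation.
    return relu(x) ** 2

def is_homogeneous(sigma, degree, rng, n_trials=1000, tol=1e-9):
    """Numerically check sigma(lam * x) == lam**degree * sigma(x) for lam > 0."""
    for _ in range(n_trials):
        x = rng.standard_normal()
        lam = rng.uniform(0.1, 10.0)
        lhs = sigma(lam * x)
        rhs = lam ** degree * sigma(x)
        if abs(lhs - rhs) > tol * max(1.0, abs(rhs)):
            return False
    return True

rng = np.random.default_rng(0)
print(is_homogeneous(relu, 1, rng))     # True: ReLU is 1-homogeneous
print(is_homogeneous(relu_sq, 2, rng))  # True: squared ReLU is 2-homogeneous
```

The "partially $1$-homogeneous" condition in the abstract is weaker than exact homogeneity; this check only illustrates the exact case.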