Gradient descent (GD) type optimization methods are the standard instrument to train artificial neural networks (ANNs) with rectified linear unit (ReLU) activation. Despite the great success of GD type optimization methods in numerical simulations for the training of ANNs with ReLU activation, it remains - even in the simplest situation of the plain vanilla GD optimization method with random initializations and ANNs with one hidden layer - an open problem to prove (or disprove) the conjecture that the risk of the GD optimization method converges in the training of such ANNs to zero as the width of the ANNs, the number of independent random initializations, and the number of GD steps increase to infinity. In this article we prove this conjecture in the situation where the probability distribution of the input data is equivalent to the continuous uniform distribution on a compact interval, where the probability distributions for the random initializations of the ANN parameters are standard normal distributions, and where the target function under consideration is continuous and piecewise affine linear. Roughly speaking, the key ingredients in our mathematical convergence analysis are (i) to prove that suitable sets of global minima of the risk functions are \emph{twice continuously differentiable submanifolds of the ANN parameter spaces}, (ii) to prove that the Hessians of the risk functions on these sets of global minima satisfy an appropriate \emph{maximal rank condition}, and, thereafter, (iii) to apply the machinery in [Fehrman, B., Gess, B., Jentzen, A., Convergence rates for the stochastic gradient descent method for non-convex objective functions. J. Mach. Learn. Res. 21(136): 1--48, 2020] to establish convergence of the GD optimization method with random initializations.
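The training setup studied in the abstract can be sketched numerically. The following is a minimal illustration, not the paper's method: plain GD on the empirical risk of a one-hidden-layer ReLU network with standard normal random initializations, a uniform input distribution on a compact interval, and a continuous piecewise affine target (here |x - 1/2| is an assumed example). The function names, width, learning rate, and step counts are illustrative choices, not taken from the paper.

```python
import numpy as np

def risk_and_grads(params, x, y):
    """Empirical L2 risk of a one-hidden-layer ReLU net and its gradient."""
    w, b, v, c = params                # hidden weights/biases, output weights/bias
    pre = np.outer(x, w) + b           # (n, width) pre-activations
    h = np.maximum(pre, 0.0)           # ReLU activation
    pred = h @ v + c                   # network output, shape (n,)
    err = pred - y
    risk = np.mean(err ** 2)
    # Backpropagation by hand for this two-layer architecture.
    dpred = 2.0 * err / len(x)
    dv = h.T @ dpred
    dc = dpred.sum()
    dpre = np.outer(dpred, v) * (pre > 0)
    dw = x @ dpre
    db = dpre.sum(axis=0)
    return risk, (dw, db, dv, dc)

def train_gd(width=16, steps=3000, lr=0.02, restarts=5, seed=0):
    """Plain GD with several independent standard normal initializations;
    returns the smallest final risk over the restarts."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, 256)     # inputs ~ uniform on [0, 1]
    y = np.abs(x - 0.5)                # continuous piecewise affine target
    best = np.inf
    for _ in range(restarts):
        params = [rng.standard_normal(width), rng.standard_normal(width),
                  rng.standard_normal(width), rng.standard_normal()]
        for _ in range(steps):
            risk, grads = risk_and_grads(params, x, y)
            params = [p - lr * g for p, g in zip(params, grads)]
        best = min(best, risk)
    return best
```

In the spirit of the conjecture, one would observe the returned risk shrinking as the width, the number of restarts, and the number of GD steps grow; this sketch only demonstrates the mechanics, not the convergence proof.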