We consider shallow (single hidden layer) neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units $N$ and gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the $1/\sqrt{N}$ and the mean-field $1/N$ normalizations. We develop an asymptotic expansion for the neural network's statistical output pointwise with respect to the scaling parameter as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that to leading order in $N$ there is no bias-variance trade-off, in that both bias and variance (both explicitly characterized) decrease as the number of hidden units increases and time grows. In addition, we show that to leading order in $N$, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization. Numerical studies on the MNIST and CIFAR10 datasets show that test and train accuracy improve monotonically as the neural network's normalization gets closer to the mean-field normalization.
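As a concrete illustration of the family of scaling schemes under study (the notation here is ours and is only a sketch of the setup described above), the shallow network with $N$ hidden units can be written with a normalization exponent $\gamma$ interpolating between the two regimes,
$$ g^{N}_{\gamma}(x) \;=\; \frac{1}{N^{\gamma}} \sum_{i=1}^{N} c^{i}\, \sigma\!\left(w^{i} \cdot x\right), \qquad \gamma \in \left[\tfrac{1}{2},\, 1\right], $$
where $\gamma = 1/2$ recovers the $1/\sqrt{N}$ normalization, $\gamma = 1$ recovers the mean-field $1/N$ normalization, and the asymptotic expansion of the statistical output is taken pointwise in the scaling parameter $\gamma$ as $N \to \infty$.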