A recent line of work has studied wide deep neural networks (DNNs) by approximating them as Gaussian Processes (GPs). A DNN trained with gradient flow was shown to map to a GP governed by the Neural Tangent Kernel (NTK), whereas earlier works showed that a DNN with an i.i.d. prior over its weights maps to the so-called Neural Network Gaussian Process (NNGP). Here we consider a DNN training protocol, involving noise, weight decay and finite width, whose outcome corresponds to a certain non-Gaussian stochastic process. An analytical framework is then introduced to analyze this non-Gaussian process, whose deviation from a GP is controlled by the finite width. Our contribution is three-fold: (i) In the infinite width limit, we establish a correspondence between DNNs trained with noisy gradients and the NNGP, not the NTK. (ii) We provide a general analytical form for the finite width correction (FWC) for DNNs with arbitrary activation functions and depth, and use it to predict the outputs of empirical finite networks with high accuracy. Analyzing the FWC behavior as a function of $n$, the training set size, we find that it is negligible both in the very small $n$ regime and, surprisingly, in the large $n$ regime (where the GP error scales as $O(1/n)$). (iii) We flesh out algebraically how these FWCs can improve the performance of finite convolutional neural networks (CNNs) relative to their GP counterparts on image classification tasks.
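As a minimal illustrative sketch of the training protocol referred to above (the notation here is our own and not taken from the text: weights $w_t$, training loss $\mathcal{L}$, weight-decay strength $\gamma$, noise temperature $T$), training with noisy gradients and weight decay can be read as Langevin dynamics,
\[
dw_t = -\nabla_w \mathcal{L}(w_t)\,dt - \gamma\, w_t\, dt + \sqrt{2T}\, dB_t ,
\]
whose stationary distribution is the Gibbs measure
\[
p_\infty(w) \;\propto\; \exp\!\left[-\tfrac{1}{T}\left(\mathcal{L}(w) + \tfrac{\gamma}{2}\lVert w\rVert^2\right)\right].
\]
Under this reading, the weight-decay term acts as a Gaussian prior over the weights, so in the infinite-width limit the induced distribution over network outputs is the NNGP posterior rather than the NTK one, while at finite width the output distribution deviates from a GP by an amount controlled by the inverse width.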