For small training set sizes $P$, the generalization error of wide neural networks is well-approximated by the error of an infinite-width neural network (NN), either in the kernel or mean-field/feature-learning regime. However, after a critical sample size $P^*$, we empirically find that the generalization of the finite-width network becomes worse than that of the infinite-width network. In this work, we empirically study the transition from infinite-width behavior to this variance-limited regime as a function of sample size $P$ and network width $N$. We find that finite-size effects can become relevant for very small dataset sizes on the order of $P^* \sim \sqrt{N}$ for polynomial regression with ReLU networks. We discuss the source of these effects using an argument based on the variance of the NN's final neural tangent kernel (NTK). This transition can be pushed to larger $P$ by enhancing feature learning or by ensemble averaging the networks. We find that the learning curve for regression with the final NTK is an accurate approximation of the NN learning curve. Using this, we provide a toy model which also exhibits $P^* \sim \sqrt{N}$ scaling and has $P$-dependent benefits from feature learning.
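The claim that regression with the final empirical NTK tracks the finite-width network's learning curve can be illustrated with a minimal sketch. The example below is our own illustration, not the paper's experimental code: it trains a one-hidden-layer ReLU network of width $N$ on a 1-D polynomial target by full-batch gradient descent, then performs kernel regression with the network's after-training empirical NTK on the same $P$ training points. The target function, width, sample size, ridge constant, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch: compare a trained finite-width ReLU network against kernel
# regression with its *final* empirical NTK. All hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def forward(params, X):
    # One-hidden-layer ReLU network in NTK parameterization: f(x) = a.relu(Wx)/sqrt(N).
    W, a = params
    h = np.maximum(X @ W.T, 0.0)
    return h @ a / np.sqrt(W.shape[0])

def empirical_ntk(params, X1, X2):
    # K(x, x') = sum over parameters of df(x)/dtheta * df(x')/dtheta for the net above.
    W, a = params
    N = W.shape[0]
    pre1, pre2 = X1 @ W.T, X2 @ W.T
    act1, act2 = np.maximum(pre1, 0.0), np.maximum(pre2, 0.0)
    mask1, mask2 = (pre1 > 0).astype(float), (pre2 > 0).astype(float)
    K_a = act1 @ act2.T / N                                 # gradients w.r.t. a
    K_W = (X1 @ X2.T) * ((mask1 * a) @ (mask2 * a).T) / N   # gradients w.r.t. W
    return K_a + K_W

# Toy task: 1-D polynomial regression (assumed target y = x^2 for illustration).
d, N, P, P_test = 1, 512, 64, 512
X_tr = rng.uniform(-1, 1, (P, d)); y_tr = X_tr[:, 0] ** 2
X_te = rng.uniform(-1, 1, (P_test, d)); y_te = X_te[:, 0] ** 2

# Train the finite-width network with full-batch gradient descent on MSE.
W = rng.standard_normal((N, d))
a = rng.standard_normal(N)
lr = 0.1
for _ in range(5000):
    pre = X_tr @ W.T
    h = np.maximum(pre, 0.0)
    err = h @ a / np.sqrt(N) - y_tr
    grad_a = h.T @ err / (np.sqrt(N) * P)
    grad_W = ((err[:, None] * (pre > 0) * a).T @ X_tr) / (np.sqrt(N) * P)
    a -= lr * grad_a
    W -= lr * grad_W

params = (W, a)
nn_test_mse = np.mean((forward(params, X_te) - y_te) ** 2)

# Kernel regression with the final (after-training) empirical NTK.
K_tt = empirical_ntk(params, X_tr, X_tr)
K_st = empirical_ntk(params, X_te, X_tr)
alpha = np.linalg.solve(K_tt + 1e-6 * np.eye(P), y_tr)  # small ridge for stability
ntk_test_mse = np.mean((K_st @ alpha - y_te) ** 2)

print(f"NN test MSE:        {nn_test_mse:.4f}")
print(f"final-NTK test MSE: {ntk_test_mse:.4f}")
```

Sweeping $P$ at fixed width $N$ (and averaging over seeds or ensembling several networks) in a script like this is one way to probe where the finite-width learning curve departs from its infinite-width counterpart.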