We study the generalization properties of minimum-norm solutions for three over-parametrized machine learning models: the random feature model, the two-layer neural network model, and the residual network model. We prove that for all three models, the generalization error of the minimum-norm solution is comparable to the Monte Carlo rate, up to logarithmic terms, provided that the models are sufficiently over-parametrized.