Deep neural networks generalize well despite being exceedingly overparameterized and being trained without explicit regularization. This curious phenomenon has inspired extensive research activity in establishing its statistical principles: Under what conditions is it observed? How do these depend on the data and on the training algorithm? When does regularization benefit generalization? While such questions remain wide open for deep neural nets, recent works have attempted to gain insights by studying simpler, often linear, models. Our paper contributes to this growing line of work by examining binary linear classification under a generative Gaussian mixture model. Motivated by recent results on the implicit bias of gradient descent, we study both max-margin SVM classifiers (corresponding to logistic loss) and min-norm interpolating classifiers (corresponding to least-squares loss). First, we leverage an idea introduced in [V. Muthukumar et al., arXiv:2005.08054, (2020)] to relate the SVM solution to the min-norm interpolating solution. Second, we derive novel non-asymptotic bounds on the classification error of the latter. Combining the two, we present novel sufficient conditions on the covariance spectrum and on the signal-to-noise ratio (SNR) under which interpolating estimators achieve asymptotically optimal performance as overparameterization increases. Interestingly, our results extend to a noisy model with constant-probability noise flips. Contrary to previously studied discriminative data models, our results emphasize the crucial role of the SNR and its interplay with the data covariance. Finally, via a combination of analytical arguments and numerical demonstrations, we identify conditions under which the interpolating estimator performs better than corresponding regularized estimators.
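The following minimal numerical sketch (not from the paper) illustrates the connection the abstract describes: in a sufficiently overparameterized Gaussian mixture model, the hard-margin SVM direction coincides with the min-norm interpolator of the binary labels. The specific dimensions, the SNR value, and the use of scikit-learn's LinearSVC with a large C as a hard-margin proxy are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, p = 30, 2000                      # n samples, p >> n features (assumed values)
snr = 4.0                            # illustrative signal strength
mu = np.zeros(p); mu[0] = snr        # mean vector of the mixture
y = rng.choice([-1.0, 1.0], size=n)  # binary labels
X = y[:, None] * mu + rng.standard_normal((n, p))  # GMM: x_i = y_i * mu + z_i

# Min-norm interpolator of the labels: theta = X^T (X X^T)^{-1} y.
theta_mn = X.T @ np.linalg.solve(X @ X.T, y)

# Hard-margin SVM, approximated by a soft-margin solver with very large C.
svm = LinearSVC(C=1e6, loss="hinge", dual=True,
                fit_intercept=False, max_iter=100_000)
svm.fit(X, y)
theta_svm = svm.coef_.ravel()

# When every training point is a support vector, the two directions align,
# so the cosine similarity approaches 1 as overparameterization grows.
cos = theta_mn @ theta_svm / (np.linalg.norm(theta_mn) * np.linalg.norm(theta_svm))
print(f"cosine(theta_minnorm, theta_svm) = {cos:.4f}")
```

Since the SVM solution is pinned to margin 1 while the interpolator is pinned to the labels, the two differ by a scaling; comparing directions via cosine similarity is therefore the relevant check.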