The phenomenon of benign overfitting, where a predictor perfectly fits noisy training data while attaining low expected loss, has received much attention in recent years, yet remains not fully understood beyond well-specified linear regression setups. In this paper, we provide several new results on when one can or cannot expect benign overfitting to occur, for both regression and classification tasks. We consider a prototypical and rather generic data model for benign overfitting of linear predictors, where an arbitrary input distribution of some fixed dimension $k$ is concatenated with a high-dimensional distribution. For linear regression which is not necessarily well-specified, we show that the minimum-norm interpolating predictor (to which standard training methods converge) is biased towards an inconsistent solution in general, hence benign overfitting will generally not occur. Moreover, we show how this can be extended beyond standard linear regression, via an argument establishing that the existence of benign overfitting on some regression problems precludes its existence on other regression problems. We then turn to classification problems, and show that the situation there is much more favorable. Specifically, we prove that the max-margin predictor (to which standard training methods are known to converge in direction) is asymptotically biased towards minimizing a weighted squared hinge loss. This allows us to reduce the question of benign overfitting in classification to the simpler question of whether this loss is a good surrogate for the misclassification error, and we use it to show benign overfitting in some new settings.
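The data model and the minimum-norm interpolator mentioned above can be illustrated with a minimal toy sketch. This is not the paper's construction: the dimensions, noise scales, and target below are illustrative assumptions; it only shows that in the concatenated setting, when the total dimension exceeds the sample size, the minimum-norm solution fits noisy training labels exactly.

```python
# Hypothetical toy sketch (assumed parameters, not the paper's setup):
# a k-dimensional "signal" input concatenated with many high-dimensional
# noise features, fit by the minimum-norm interpolating predictor.
import numpy as np

rng = np.random.default_rng(0)
n, k, d_noise = 50, 3, 500              # samples, signal dim, noise dim

X_sig = rng.normal(size=(n, k))         # arbitrary fixed-dimensional input
X_noise = rng.normal(size=(n, d_noise)) / np.sqrt(d_noise)  # high-dim part
X = np.concatenate([X_sig, X_noise], axis=1)

w_star = rng.normal(size=k)             # target depends only on the signal part
y = X_sig @ w_star + 0.1 * rng.normal(size=n)   # noisy labels

# Minimum-norm interpolator: w = X^+ y. Since d = k + d_noise > n, the
# system X w = y is underdetermined and the pseudoinverse picks the
# interpolating solution of smallest Euclidean norm.
w = np.linalg.pinv(X) @ y

train_err = np.max(np.abs(X @ w - y))   # ~0: perfect fit of noisy data
```

Whether such an interpolator also attains low expected loss (i.e., whether overfitting is benign) is exactly the question the abstract addresses; the sketch only demonstrates the interpolation half of the phenomenon.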